Change log

Universidad de Chile

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

Universidad de Chile currently has 5 source-backed claim records and 2 official source attributions. Latest tracked change date: May 15, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.

Universidad de Chile current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+10 −0
1  # Universidad de Chile AI policy record
2+source_status: The Faculty of Medicine's Humanizar la Inteligencia document describes itself as an orienting guide and not as binding regulation.
3+Evidence (es, c9ef7aad4897): Se trata de un documento de trabajo, en construcción y sujeto a revisión permanente, concebido como una guía orientadora y no como una normativa vinculante.
4+ai_tool_treatment: The Faculty of Medicine guide proposes five principles for AI initiatives in that faculty: transparency and traceability, human supervision and non-delegation of critical judgment, equity and technological justice, academic integrity and responsible authorship, and participatory governance with continuous updating.
5+Evidence (es, c9ef7aad4897): se proponen cinco principios fundamentales... transparencia y trazabilidad... supervisión humana y no delegación del juicio crítico... equidad y justicia tecnológica... integridad académica y autoría responsable... gobernanza participativa y actualización continua
6+research: For health research publications, the Faculty of Medicine guide suggests declaring the type, timing and purpose of AI use, never attributing authorship to an AI tool, and validating AI-assisted production through human authors.
7+Evidence (es, c9ef7aad4897): TRANSPARENCIA EN EL USO: declarar de forma explícita el tipo, el momento y el propósito del uso de herramientas de IA... DISTINCIÓN ENTRE AUTORÍA Y ASISTENCIA: nunca atribuir autoría a una herramienta de IA... SUPERVISIÓN HUMANA: toda producción asistida por IA debe ser validada por los autores
8+teaching: The FCFM guidance recommends that course teaching teams build a transparency policy with students around possible generative AI uses, including defining when and how the AI tool used should be cited.
9+Evidence (es, d21e6f746c3d): Construir una política de transparencia al interior del curso, en la cual se consensúen los posibles usos junto a los y las estudiantes. Por ejemplo, definir cuándo y cómo procede la citación de la herramienta IA utilizada.
10+privacy: The FCFM guidance warns teaching teams that inadequate use of generative AI can create ethics risks such as plagiarism and copyright issues, and security/privacy risks such as insufficient data protection or data use without consent.
11+Evidence (es, d21e6f746c3d): Ética. En este aspecto, usos inadecuados de estas herramientas podrían facilitar acciones relacionadas con el plagio, violaciones de derechos de autor y propiedad intelectual... Seguridad y privacidad. Algunos riesgos a tener en consideración son la protección insuficiente de datos... uso de datos sin consentimiento...
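An insertion-only preview like the one above can be generated mechanically from the claim/evidence records: with no paired historical snapshot to diff against, every current record is emitted as an added (`+`) line. A minimal sketch in Python; the record shape and field names here are assumptions for illustration, not the tracker's actual schema:

```python
def diff_preview(title, records):
    """Render claim/evidence records as a diff-style insertion block.

    `records` is a list of (field, claim, lang, evidence_hash, evidence)
    tuples. Line 1 is the record header shown as context; each claim and
    each evidence quote then gets its own numbered '+' line.
    """
    lines = [f"1  # {title}"]
    n = 2
    for field, claim, lang, ev_hash, evidence in records:
        lines.append(f"{n}+{field}: {claim}")
        n += 1
        lines.append(f"{n}+Evidence ({lang}, {ev_hash}): {evidence}")
        n += 1
    return "\n".join(lines)
```

A full old/new diff would instead feed two snapshot texts to a real diff routine (e.g. `difflib.unified_diff`), which is why paired historical snapshots are required for that view.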

Claim changes

5 claim records

research

For health research publications, the Faculty of Medicine guide suggests declaring the type, timing and purpose of AI use, never attributing authorship to an AI tool, and validating AI-assisted production through human authors.

Review: agent reviewed · Confidence: 92% · Evidence: 1 · Languages: es

ai_tool_treatment

The Faculty of Medicine guide proposes five principles for AI initiatives in that faculty: transparency and traceability, human supervision and non-delegation of critical judgment, equity and technological justice, academic integrity and responsible authorship, and participatory governance with continuous updating.

Review: agent reviewed · Confidence: 93% · Evidence: 1 · Languages: es

source_status

The Faculty of Medicine's Humanizar la Inteligencia document describes itself as an orienting guide and not as binding regulation.

Review: agent reviewed · Confidence: 94% · Evidence: 1 · Languages: es

privacy

The FCFM guidance warns teaching teams that inadequate use of generative AI can create ethics risks such as plagiarism and copyright issues, and security/privacy risks such as insufficient data protection or data use without consent.

Review: agent reviewed · Confidence: 90% · Evidence: 1 · Languages: es

teaching

The FCFM guidance recommends that course teaching teams build a transparency policy with students around possible generative AI uses, including defining when and how the AI tool used should be cited.

Review: agent reviewed · Confidence: 91% · Evidence: 1 · Languages: es

Source snapshots

2 source attributions
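The 12-character identifiers attached to each evidence line (c9ef7aad4897, d21e6f746c3d) read like truncated content hashes of the source snapshots. A minimal sketch of one plausible scheme, assuming SHA-256 over the snapshot text truncated to a 12-hex-character prefix; the tracker's actual hashing scheme is not documented here:

```python
import hashlib

def snapshot_id(text: str, length: int = 12) -> str:
    # Fingerprint a source snapshot: SHA-256 over the UTF-8 bytes,
    # truncated to a short hex prefix (assumed scheme, not confirmed).
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:length]
```

A content-derived identifier like this makes evidence quotes verifiable: if the source page changes, re-hashing it yields a different identifier, flagging the record for re-review.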