Change log

University of Granada

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

University of Granada currently has 6 source-backed claim records and 4 official source attributions. Latest tracked change: May 16, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
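Once paired historical snapshots exist, a full old/new diff could be produced along these lines. This is a minimal sketch; the snapshot contents and record fields shown are hypothetical, not the tracker's actual storage format:

```python
import difflib

# Hypothetical paired captures of one source page (old vs. new snapshot).
old_snapshot = [
    "source_status: recommendations presented to the Council of Government",
    "academic_integrity: declare AI use in coursework",
]
new_snapshot = [
    "source_status: recommendations presented to the Council of Government",
    "academic_integrity: declare AI use and take responsibility for content",
    "privacy: avoid sensitive data in public AI tools",
]

# unified_diff yields the familiar +/- line prefixes used in the preview below.
diff = list(difflib.unified_diff(
    old_snapshot, new_snapshot,
    fromfile="snapshot_old", tofile="snapshot_new",
    lineterm="",
))
for line in diff:
    print(line)
```

Lines beginning with `+` are additions in the newer snapshot, lines with `-` are removals; with no historical pair, every current record renders as an insertion, which is what the preview below shows.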

University of Granada current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+12 −0
  # University of Granada AI policy record
+ source_status: UGR has published Council of Government-presented recommendations for AI use, but the document says it is not normative and does not establish what is permitted or prohibited.
+ Evidence (es, 27d8401a3b70): "It is important to note that this recommendations document is not a normative document that establishes what is and is not permitted. It should serve as inspiration for the various bodies, centers, units, and members of the UGR."
+ academic_integrity: UGR guidance recommends that students declare AI use in work, take responsibility for the content, avoid plagiarism, and use AI as learning support rather than a substitute for personal effort.
+ Evidence (es, 9cd640cf35ce): "Ethics and academic integrity: declare AI use in coursework, take responsibility for the content, and avoid plagiarism. Complementary use of AI: use AI as learning support, without substituting personal effort."
+ privacy: UGR guidance recommends avoiding sensitive information in public AI tools, using secure institutional tools, and prioritizing tools provided or contracted by UGR because they undergo privacy and security review.
+ Evidence (es, 9cd640cf35ce): "Privacy protection: know the data protection regulations, avoid sharing sensitive information, and use secure institutional tools. In any case, it is recommended to prioritize the AI tools provided by the Universidad de Granada or third-party tools contracted by the UGR."
+ research: UGR research guidance recommends declaring AI use with technical and methodological details, verifying and supervising AI outputs, assuming authorship responsibility, and protecting data privacy.
+ Evidence (es, 9cd640cf35ce): "Research with AI: Transparency: declare AI use, specifying technical and methodological details to guarantee traceability. Integrity and supervision: AI should complement, not replace, critical thinking. Verify and supervise results, assuming authorship."
+ ai_tool_treatment: UGR states that PDI and PTGAS can use a protected Copilot chat through the institutional Microsoft account, while public generative AI tools can reuse entered data for model training.
+ Evidence (es, 40be450c356f): "The Universidad de Granada gives PDI and PTGAS the opportunity to use the protected version of the Copilot chat by signing in with the institutional Microsoft account (@ms.ugr.es). The main advantage of this generative AI is that the questions and data we use to interact with the model will not later be reused to retrain the model."
+ ai_tool_treatment: CEPRUD says it evaluates market AI tools for university uses and tells users to review privacy terms and conditions before using any listed tool.
+ Evidence (es, 2a64e2dc0cf6): "CEPRUD evaluates tools available on the market with different practical uses for the university context. Before using any of them, please review the privacy terms as well as the conditions of use, keeping in mind the recommendations made on this page."

Claim changes

6 claim records

ai_tool_treatment

CEPRUD says it evaluates market AI tools for university uses and tells users to review privacy terms and conditions before using any listed tool.

Review: agent reviewed · Confidence: 82% · Evidence: 1 · Language: es

research

UGR research guidance recommends declaring AI use with technical and methodological details, verifying and supervising AI outputs, assuming authorship responsibility, and protecting data privacy.

Review: agent reviewed · Confidence: 90% · Evidence: 1 · Language: es

ai_tool_treatment

UGR states that PDI and PTGAS can use a protected Copilot chat through the institutional Microsoft account, while public generative AI tools can reuse entered data for model training.

Review: agent reviewed · Confidence: 88% · Evidence: 1 · Language: es

privacy

UGR guidance recommends avoiding sensitive information in public AI tools, using secure institutional tools, and prioritizing tools provided or contracted by UGR because they undergo privacy and security review.

Review: agent reviewed · Confidence: 90% · Evidence: 1 · Language: es

academic_integrity

UGR guidance recommends that students declare AI use in work, take responsibility for the content, avoid plagiarism, and use AI as learning support rather than a substitute for personal effort.

Review: agent reviewed · Confidence: 91% · Evidence: 1 · Language: es

source_status

UGR has published Council of Government-presented recommendations for AI use, but the document says it is not normative and does not establish what is permitted or prohibited.

Review: agent reviewed · Confidence: 94% · Evidence: 1 · Language: es

Source snapshots

4 source attributions
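The 12-character evidence identifiers above (e.g. 27d8401a3b70) read like truncated content hashes of the captured source text. One plausible derivation, offered purely as an illustrative assumption rather than the tracker's documented scheme, is a SHA-256 digest shortened to a hex prefix:

```python
import hashlib

def snapshot_id(text: str, length: int = 12) -> str:
    """Return a short hex ID for a source snapshot: the SHA-256
    digest of the UTF-8 text, truncated to `length` hex characters.
    (Hypothetical scheme chosen for illustration.)"""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:length]

# The same captured text always yields the same ID; any edit to the
# source page produces a different ID, which is what lets a tracker
# detect that a snapshot has changed.
print(snapshot_id("example snapshot text"))
```

A content-derived ID like this makes snapshots self-verifying: re-fetching a page and re-hashing it confirms whether the stored evidence still matches the live source.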