Change log

Universidade Federal do Rio de Janeiro

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

Universidade Federal do Rio de Janeiro currently has 5 source-backed claim records and 2 official source attributions. Latest tracked change: May 16, 2026.

This tracker is not legal advice, academic-integrity advice, or an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
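A preview like the one below can be generated with a standard unified diff between two snapshot bodies. This is a minimal sketch, assuming snapshots are stored as lists of text lines; the snapshot contents and file labels here are hypothetical, and real old/new diffs would require the paired historical snapshots mentioned above.

```python
import difflib

def diff_preview(old_lines, new_lines):
    """Return a unified-diff preview between two snapshot bodies."""
    return list(difflib.unified_diff(
        old_lines, new_lines,
        fromfile="snapshot_old", tofile="snapshot_new",
        lineterm=""))

# Hypothetical snapshot contents for illustration only.
old = ["# Universidade Federal do Rio de Janeiro AI policy record"]
new = old + ["source_status: preliminary AI-use documents announced"]
for line in diff_preview(old, new):
    print(line)
```

With only a single current snapshot, every record line appears as an insertion, which matches the "+10 −0" shape of the preview below.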

Universidade Federal do Rio de Janeiro current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+10 −0
 1   # Universidade Federal do Rio de Janeiro AI policy record
+2   source_status: UFRJ publicly announced preliminary documents on academic integrity and AI-use recommendations for the academic community.
+3   Evidence (pt-BR, 73070f3a33a5, translated): The Universidade Federal do Rio de Janeiro (UFRJ) has made public two preliminary documents that guide the academic community on the ethical and responsible use of artificial intelligence, especially in teaching and research activities.
+4   academic_integrity: UFRJ states that delegating monographs, dissertations, and theses to generative AI systems is considered academic dishonesty under the proposed integrity guidance.
+5   Evidence (pt-BR, 73070f3a33a5, translated): Simply delegating responsibility for producing these works to third parties or to generative artificial intelligence (GenAI) systems is considered academic dishonesty and may result in institutional sanctions, including revocation of the degree, according to the document's proposal.
+6   ai_tool_treatment: UFRJ guidance allows generative AI to assist preparation and development of academic work, while keeping final responsibility with human authors.
+7   Evidence (pt-BR, 73070f3a33a5, translated): The guidelines also stress that the use of GenAI tools can assist in the preparation and development of these works, including exploring research topics and organizing ideas, but final responsibility for the content produced must remain tied to human authorship.
+8   teaching: CRIA/UFRJ recommends that instructors define acceptable and unacceptable generative AI uses in their course teaching plans.
+9   Evidence (pt-BR, d18f8d0c7449, translated): Each instructor must clearly define, in the teaching plan for their course, what constitutes acceptable and unacceptable use of GenAI tools.
+10  privacy: CRIA/UFRJ recommends warning researchers about risks of uploading or processing confidential or proprietary third-party data through generative AI tools.
+11  Evidence (pt-BR, d18f8d0c7449, translated): Researchers should be warned about the risks of uploading and/or processing confidential or proprietary third-party data or programs through GenAI tools, since that data may be incorporated into future training of the tools.

Claim changes

5 claim records

source_status

UFRJ publicly announced preliminary documents on academic integrity and AI-use recommendations for the academic community.

Review: agent reviewed · Confidence: 90% · Evidence: 1 · Languages: pt-BR

academic_integrity

UFRJ states that delegating monographs, dissertations, and theses to generative AI systems is considered academic dishonesty under the proposed integrity guidance.

Review: agent reviewed · Confidence: 88% · Evidence: 1 · Languages: pt-BR

ai_tool_treatment

UFRJ guidance allows generative AI to assist preparation and development of academic work, while keeping final responsibility with human authors.

Review: agent reviewed · Confidence: 88% · Evidence: 1 · Languages: pt-BR

teaching

CRIA/UFRJ recommends that instructors define acceptable and unacceptable generative AI uses in their course teaching plans.

Review: agent reviewed · Confidence: 86% · Evidence: 1 · Languages: pt-BR

privacy

CRIA/UFRJ recommends warning researchers about risks of uploading or processing confidential or proprietary third-party data through generative AI tools.

Review: agent reviewed · Confidence: 84% · Evidence: 1 · Languages: pt-BR

Source snapshots

2 source attributions
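The short snapshot identifiers used in the evidence records (e.g. 73070f3a33a5) look like truncated content hashes. A minimal sketch of how such an identifier could be derived, assuming SHA-256 of the snapshot text truncated to 12 hex characters (an assumption; this tracker does not state the actual hashing scheme):

```python
import hashlib

def snapshot_id(text: str) -> str:
    # Hypothetical scheme: SHA-256 of the UTF-8 snapshot body,
    # truncated to the first 12 hex characters.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]

print(snapshot_id("example snapshot body"))
```

A content-derived identifier lets a reader verify that a quoted evidence line came from a specific archived snapshot without republishing the full page.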