Change log

Stony Brook University, State University of New York

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

Stony Brook University, State University of New York currently has 7 source-backed claim records and 5 official source attributions. Most recent tracked change: May 16, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
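The paired-snapshot diff described above can be sketched in a few lines. This is an illustrative reconstruction, not the tracker's actual implementation: the pairing logic and labels (here, short snapshot hashes as `fromfile`/`tofile`) are assumptions.

```python
import difflib


def snapshot_diff(old_text: str, new_text: str, old_id: str, new_id: str) -> str:
    """Build a unified diff between two historical source snapshots.

    old_id / new_id are illustrative labels (e.g. short snapshot hashes);
    how the real tracker pairs and labels snapshots is not documented here.
    """
    diff_lines = difflib.unified_diff(
        old_text.splitlines(keepends=True),
        new_text.splitlines(keepends=True),
        fromfile=old_id,
        tofile=new_id,
    )
    return "".join(diff_lines)
```

Diffed against an empty "old" snapshot, every current line shows up as an insertion, which matches the all-insertions preview below.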

Stony Brook University, State University of New York current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+14 −0
1  # Stony Brook University, State University of New York AI policy record
2+academic_integrity: Stony Brook's Academic Integrity Policy lists representing work generated by artificial intelligence as one's own work as an example of academic dishonesty.
3+Evidence (en, f31702d3b396): The following represents examples of academic dishonesty and does not constitute an exhaustive list: ... Representing work generated by artificial intelligence as one's own work.
4+ai_tool_treatment: Stony Brook guidance says generative AI use in coursework can be prohibited, allowed, or required depending on the course or assignment, and students should consult course policies or instructors when unsure.
5+Evidence (en, 5eaad4a56eaa): Using generative AI in academic work is prohibited in some cases, allowable in others, and required in some instances. Read policies on academic integrity ... consult course policies, and when in doubt ask your instructor.
6+privacy: Stony Brook guidance warns users not to enter sensitive, personal, or proprietary information into generative AI tools without understanding the protections provided by the tool.
7+Evidence (en, 5eaad4a56eaa): You should not enter sensitive, personal, or proprietary information into a generative AI tool without understanding the specific protections provided by that tool.
8+source_status: Stony Brook's central generative AI FAQ states that the University does not currently have an AI policy and is reviewing existing policies for generative AI implications.
9+Evidence (en, 5eaad4a56eaa): Does Stony Brook have an AI policy? No. The University is reviewing existing policies to ensure they appropriately address the power and implications of generative AI.
10+ai_tool_treatment: Stony Brook DoIT maintains an AI Tools directory that identifies available tools such as Copilot, Gemini, NotebookLM, Turnitin, and Zoom AI Companion, while noting that listed tools may not be used with HIPAA data.
11+Evidence (en, d019ff3a7394): A current directory of AI tools offered by DoIT ... HIPPA Data Considerations. The following tools may not be used with HIPAA data. Copilot ... Gemini ... NotebookLM ... Turnitin ... Zoom.
12+teaching: Stony Brook CELT guidance advises instructors to discuss AI usage policies clearly, include AI statements in syllabi, and outline which assignments allow or do not allow AI tools.
13+Evidence (en, 3b987623868a): It is important to discuss your policies on AI usage in a clear and unambiguous manner. Including an AI statement in your course syllabus ... outline which assignments allow for the use of AI tools and which assignments do not permit the usage of AI tools.
14+teaching: Stony Brook CELT guidance says users should review AI-generated content for accuracy because AI tools can produce biased, illogical, false, or nonexistent-source outputs.
15+Evidence (en, 3b987623868a): In many cases, AI tools can develop biased, illogical, or false information and even can generate sources that do not exist. It is important that anyone who makes use of AI tools to generate content reviews its output for accuracy.

Claim changes

7 claim records

academic_integrity

Stony Brook's Academic Integrity Policy lists representing work generated by artificial intelligence as one's own work as an example of academic dishonesty.

Review: Agent reviewed · Confidence: 96% · Evidence: 1 · Languages: en

ai_tool_treatment

Stony Brook guidance says generative AI use in coursework can be prohibited, allowed, or required depending on the course or assignment, and students should consult course policies or instructors when unsure.

Review: Agent reviewed · Confidence: 93% · Evidence: 1 · Languages: en

privacy

Stony Brook guidance warns users not to enter sensitive, personal, or proprietary information into generative AI tools without understanding the protections provided by the tool.

Review: Agent reviewed · Confidence: 92% · Evidence: 1 · Languages: en

source_status

Stony Brook's central generative AI FAQ states that the University does not currently have an AI policy and is reviewing existing policies for generative AI implications.

Review: Agent reviewed · Confidence: 91% · Evidence: 1 · Languages: en

ai_tool_treatment

Stony Brook DoIT maintains an AI Tools directory that identifies available tools such as Copilot, Gemini, NotebookLM, Turnitin, and Zoom AI Companion, while noting that listed tools may not be used with HIPAA data.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

teaching

Stony Brook CELT guidance advises instructors to discuss AI usage policies clearly, include AI statements in syllabi, and outline which assignments allow or do not allow AI tools.

Review: Agent reviewed · Confidence: 88% · Evidence: 1 · Languages: en

teaching

Stony Brook CELT guidance says users should review AI-generated content for accuracy because AI tools can produce biased, illogical, false, or nonexistent-source outputs.

Review: Agent reviewed · Confidence: 86% · Evidence: 1 · Languages: en

Source snapshots

5 source attributions

AI Tools | Division of Information Technology

official_guidance checked May 16, 2026

Snapshot hash
d019ff3a7394581c990eba22085246c7fa1d61411080d4e7bac6fa09ce730681
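The 64-character snapshot hashes above look like SHA-256 hex digests. A minimal sketch of computing one, assuming SHA-256 over the raw snapshot bytes (the tracker's exact input normalization, such as encoding or whitespace handling, is not documented here):

```python
import hashlib


def snapshot_hash(content: bytes) -> str:
    """Return a hex digest identifying a source snapshot.

    Assumption: the tracker's 64-character hashes are SHA-256 hex
    digests of the snapshot content; normalization details are unknown.
    """
    return hashlib.sha256(content).hexdigest()
```

Because the digest is deterministic, re-fetching an unchanged source yields the same hash, which is how a change log like this can detect that a policy page was modified.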