academic_integrity
Stony Brook's Academic Integrity Policy lists representing work generated by artificial intelligence as one's own work as an example of academic dishonesty.
Open, evidence-backed AI policy records for public reuse.
Change log
Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.
Current public record freshness and review state.
Stony Brook University, State University of New York currently has 7 source-backed claim records and 5 official source attributions. Latest tracked change date: May 16, 2026.
This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.
Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
Inserted lines represent current public claim and evidence records in the source-backed dataset.
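The snapshot-hash and diff-preview mechanics described above can be sketched as follows. This is a minimal illustration, not the tracker's actual implementation; the function names `snapshot_hash` and `diff_preview` and the sample claim text are hypothetical, and it assumes snapshots are stored as plain text and compared with a standard unified diff.

```python
import difflib
import hashlib


def snapshot_hash(text: str) -> str:
    """Content hash of a source snapshot, used to detect changes between checks."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def diff_preview(old: str, new: str) -> list:
    """Unified diff between two snapshots. Without a paired historical
    snapshot, diffing against empty text shows every current line as inserted."""
    return list(difflib.unified_diff(
        old.splitlines(),
        new.splitlines(),
        fromfile="previous_snapshot",
        tofile="current_snapshot",
        lineterm="",
    ))


# Hypothetical current claim text; no historical snapshot is paired yet,
# so the preview diffs against an empty prior snapshot.
current = "Generative AI use can be prohibited, allowed, or required."
preview = diff_preview("", current)
```

Under this sketch, every line of the current record surfaces as an inserted (`+`) line, which matches how the preview above is described: full old/new diffs only become possible once two dated snapshots of the same source exist.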
7 claim records
Stony Brook's Academic Integrity Policy lists representing work generated by artificial intelligence as one's own work as an example of academic dishonesty.
Stony Brook guidance says generative AI use in coursework can be prohibited, allowed, or required depending on the course or assignment, and students should consult course policies or instructors when unsure.
Stony Brook guidance warns users not to enter sensitive, personal, or proprietary information into generative AI tools without understanding the protections provided by the tool.
Stony Brook's central generative AI FAQ states that the University does not currently have an AI policy and is reviewing existing policies for generative AI implications.
Stony Brook DoIT maintains an AI Tools directory that identifies available tools such as Copilot, Gemini, NotebookLM, Turnitin, and Zoom AI Companion, while noting that listed tools may not be used with HIPAA data.
Stony Brook CELT guidance advises instructors to discuss AI usage policies clearly, include AI statements in syllabi, and outline which assignments allow or do not allow AI tools.
Stony Brook CELT guidance says users should review AI-generated content for accuracy because AI tools can produce outputs that are biased, illogical, or false, or that cite nonexistent sources.
5 source attributions
official_guidance checked May 16, 2026
official_guidance checked May 16, 2026
official_guidance checked May 16, 2026
official_guidance checked May 16, 2026
official_guidance checked May 16, 2026