Change log

Boston University

Source-check timeline, source snapshot hashes, claim review state, and a diff-style preview of current source-backed claim evidence.

Change summary

Current public record freshness and review state.

Boston University currently has 8 source-backed claim records and 5 official source attributions. Latest tracked change date: May 13, 2026.

This tracker is not legal advice, not academic integrity advice, and not an official university statement unless a linked source is the university's own official page.

Claim/evidence diff preview

Diff-style preview built from current public claim/evidence records. Full old/new source diffs require paired historical snapshots.
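A preview like this can be sketched with Python's standard difflib; the snippet below is illustrative only, assuming snapshots are stored as plain text lines (diffing against an empty prior snapshot makes every record line an insertion, which is how the current preview is built):

```python
import difflib

def preview_diff(old_lines, new_lines):
    """Return a unified diff between two snapshot texts.

    With no prior snapshot (old_lines=[]), every record line
    appears as an insertion; a paired historical snapshot would
    yield a true old/new diff.
    """
    return list(difflib.unified_diff(
        old_lines, new_lines,
        fromfile="previous-snapshot",
        tofile="current-snapshot",
        lineterm="",
    ))

# Hypothetical record lines, standing in for the real dataset.
records = [
    "academic_integrity: disclose GenAI use ...",
    "privacy: avoid sensitive data in commercial GenAI tools ...",
]
for line in preview_diff([], records):
    print(line)
```

Once paired historical snapshots exist, the same function produces removals and context lines as well, not just insertions.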

Boston University current policy evidence

Inserted lines represent current public claim and evidence records in the source-backed dataset.

+16-0
1 1 # Boston University AI policy record
2+academic_integrity: Boston University student guidance tells students to disclose GenAI use and states that submitting GenAI-generated or GenAI-assisted output without attribution is plagiarism that an instructor will treat as academic misconduct.
3+Evidence (en, 01352aaeedc6): Disclose (by a proper reference) when you leverage GenAI tools and describe how you used them in producing your work. Submitting GenAI-generated or GenAI-assisted output without attribution is a form of plagiarism that your instructor will treat as an instance of academic misconduct.
4+academic_integrity: Boston University student guidance says instructors have broad discretion to set GenAI rules within each course, and students are responsible for complying with those instructions and should consult the course GenAI policy or ask the instructor before assuming GenAI use is allowed.
5+Evidence (en, 01352aaeedc6): Instructors have broad discretion to set rules on how GenAI may or may not be used within each individual course. It is your responsibility to comply with these instructions. Always consult the GenAI policy for the course or ask the instructor before assuming use of GenAI is allowed.
6+privacy: Boston University faculty and staff guidance says users should avoid inputting or sharing private or sensitive information through commercial GenAI tools because this could violate privacy laws or university policy, and says TerrierGPT is recommended for institutional use but is not approved for restricted use data.
7+Evidence (en, a30725821221): Avoid inputting or sharing private or sensitive information through commercial GenAI tools, as this could violate privacy laws or university policy. For institutional use, TerrierGPT is recommended because it protects university data and complies with internal privacy policies. However, TerrierGPT is not approved for restricted use data.
8+security_review: Boston University's TerrierGPT page says data entered into TerrierGPT is not used to train external models and that the platform is approved for confidential, but not restricted-use, data.
9+Evidence (en, 4bfdb1e38937): TerrierGPT complies with BU's internal privacy and data protection policies, and none of the data entered is used to train external models. Data uploaded to the platform is only accessible by IS&T personnel and has the same strong privacy protections applicable to all BU enterprise data... the platform is approved for data classified as confidential, but not restricted use.
10+ai_tool_treatment: Boston University faculty and staff guidance says academic and administrative GenAI users should retain human oversight, evaluate and verify generated content, and disclose GenAI use in materials, documents, or publications.
11+Evidence (en, a30725821221): All academic and administrative users of GenAI tools should: Retain human oversight of AI-assisted outputs. Evaluate and verify the validity of generated content. Disclose when GenAI tools are used in the creation of materials, documents, or publications.
12+teaching: Boston University AIDA classroom guidance recommends that faculty state their GenAI policy explicitly in the course syllabus, disclose how instructors will use GenAI for course tasks, explain the policy in the first week, and distinguish acceptable from unacceptable uses.
13+Evidence (en, cef00315b9e8): State your policy on GenAI use explicitly in the course syllabus. This includes disclosing how the instructors (faculty and student teachers) will use GenAI for lecture preparation, presentations, grading, and other course related tasks. Take time in the first week of class to explain your policy and its rationale. Make clear distinctions between acceptable and unacceptable uses.
14+teaching: Boston University's AIDA FAQ says BU faculty may freely decide course AI policies within broad limits established by the BU Academic Conduct Code, and students are encouraged to review course policies and consult instructors.
15+Evidence (en, 0485983f7f84): AI is rapidly evolving, and BU faculty members may freely decide how to set the AI policies for each of their courses, within broad limits established by the BU Academic Conduct Code. Variation across courses is the norm, not the exception. Students are encouraged to review the course policies and consult their instructors for guidance.
16+academic_integrity: Boston University AIDA classroom guidance tells faculty to be very cautious with accusations of GenAI misuse because AI detection tools are highly fallible, and to apply enforcement policies uniformly.
17+Evidence (en, cef00315b9e8): Be very cautious with accusations of GenAI misuse; all detection tools are highly fallible, both with respect to false positives and false negatives, despite the marketing claims of companies that sell these products. Apply enforcement policies uniformly to minimize bias.

Claim changes

8 claim records

academic_integrity

Boston University student guidance tells students to disclose GenAI use and states that submitting GenAI-generated or GenAI-assisted output without attribution is plagiarism that an instructor will treat as academic misconduct.

Review: Agent reviewed · Confidence: 94% · Evidence: 1 · Languages: en

academic_integrity

Boston University student guidance says instructors have broad discretion to set GenAI rules within each course, and students are responsible for complying with those instructions and should consult the course GenAI policy or ask the instructor before assuming GenAI use is allowed.

Review: Agent reviewed · Confidence: 93% · Evidence: 1 · Languages: en

privacy

Boston University faculty and staff guidance says users should avoid inputting or sharing private or sensitive information through commercial GenAI tools because this could violate privacy laws or university policy, and says TerrierGPT is recommended for institutional use but is not approved for restricted use data.

Review: Agent reviewed · Confidence: 93% · Evidence: 1 · Languages: en

security_review

Boston University's TerrierGPT page says data entered into TerrierGPT is not used to train external models and that the platform is approved for confidential, but not restricted-use, data.

Review: Agent reviewed · Confidence: 93% · Evidence: 1 · Languages: en

ai_tool_treatment

Boston University faculty and staff guidance says academic and administrative GenAI users should retain human oversight, evaluate and verify generated content, and disclose GenAI use in materials, documents, or publications.

Review: Agent reviewed · Confidence: 92% · Evidence: 1 · Languages: en

teaching

Boston University AIDA classroom guidance recommends that faculty state their GenAI policy explicitly in the course syllabus, disclose how instructors will use GenAI for course tasks, explain the policy in the first week, and distinguish acceptable from unacceptable uses.

Review: Agent reviewed · Confidence: 91% · Evidence: 1 · Languages: en

teaching

Boston University's AIDA FAQ says BU faculty may freely decide course AI policies within broad limits established by the BU Academic Conduct Code, and students are encouraged to review course policies and consult instructors.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en

academic_integrity

Boston University AIDA classroom guidance tells faculty to be very cautious with accusations of GenAI misuse because AI detection tools are highly fallible, and to apply enforcement policies uniformly.

Review: Agent reviewed · Confidence: 90% · Evidence: 1 · Languages: en
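The review lines above all carry the same fields; a minimal sketch of that record shape, with hypothetical field names (the tracker's internal schema is not published):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ClaimRecord:
    # Field names are illustrative, inferred from the review lines above.
    category: str              # e.g. "academic_integrity"
    claim: str                 # one-sentence summary of the claim
    review_state: str          # e.g. "agent reviewed"
    confidence: float          # 0.0-1.0 (shown above as a percentage)
    evidence_count: int        # number of evidence excerpts
    languages: tuple           # e.g. ("en",)

record = ClaimRecord(
    category="academic_integrity",
    claim="Disclose GenAI use; unattributed GenAI output is plagiarism.",
    review_state="agent reviewed",
    confidence=0.94,
    evidence_count=1,
    languages=("en",),
)
```

A frozen dataclass keeps each record immutable once ingested, which suits an append-only change log.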

Source snapshots

5 source attributions
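The 12-character evidence identifiers above (e.g. 01352aaeedc6) look like truncated content hashes; a hedged sketch, assuming SHA-256 over the snapshot text (the tracker's actual hashing scheme is not documented here):

```python
import hashlib

def snapshot_id(snapshot_text: str, length: int = 12) -> str:
    """Derive a short, stable identifier from snapshot content.

    Assumes SHA-256 over UTF-8 text, truncated to `length` hex
    characters; the same snapshot text always yields the same ID,
    so a changed ID signals a changed source page.
    """
    digest = hashlib.sha256(snapshot_text.encode("utf-8")).hexdigest()
    return digest[:length]

print(snapshot_id("example snapshot body"))
```

Content-derived IDs like this let the tracker detect source changes without storing full page copies side by side.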