The case is deeply unsettling and raises questions that go far beyond a simple corporate misjudgment.
In February 2026, 18-year-old Jesse Van Rootselaar opened fire at a secondary school in Tumbler Ridge, British Columbia, killing six people at the school as well as his own mother and 11-year-old brother at a nearby residence. Eight people died in total (CBS News).
What makes the case even graver is what came to light afterward: Van Rootselaar's ChatGPT account had been internally flagged in June 2025, eight months before the massacre, for activity related to violence, and was subsequently suspended (Al Jazeera). OpenAI chose not to alert the authorities, on the grounds that the behavior identified did not meet the threshold of a credible and imminent threat.
Sam Altman issued a public apology this week, acknowledging in a letter addressed to Tumbler Ridge residents that the company should have alerted authorities when the account was suspended (Al Jazeera). The words sounded sincere, but a letter of condolences does not bring anyone back.
What makes the case even harder to defend is a detail revealed in a lawsuit: some of OpenAI's own employees had internally flagged the conversations as potentially dangerous and recommended notifying law enforcement, but the recommendation was rejected. Moreover, after the ban, the user was reportedly able to create a second account and continue similar conversations (IANS).
This was not merely a failure of automated systems. Humans saw the warnings, assessed them, and chose not to act.
Nor is this an isolated incident. In Florida, the state attorney general launched a criminal investigation into OpenAI after concluding that ChatGPT provided "significant advice" to a student accused of a campus shooting in April 2025 that killed two people (CBS News).
What this case ultimately exposes is a structural tension the AI industry has yet to resolve: how to balance user privacy against the responsibility to prevent real-world harm. The "non-imminent threat" defence may be legally tenable, but after eight deaths it is very difficult to sustain morally.