In the wake of the tragic mass shooting at Tumbler Ridge Secondary School in British Columbia, families of the victims have initiated legal action against OpenAI and its CEO, Sam Altman. The lawsuits, filed in U.S. federal court, allege that the company failed to alert law enforcement to the shooter’s concerning interactions with ChatGPT, and that this failure contributed to the February attack that killed eight people and injured many others.
The shooter, 18-year-old Jesse Van Rootselaar, reportedly engaged with ChatGPT for months prior to the attack, discussing scenarios involving gun violence. OpenAI’s automated systems flagged these interactions in June 2025, and the company deactivated Van Rootselaar’s account. It did not, however, inform authorities, concluding that the behavior did not meet its threshold for referral. That decision has come under intense scrutiny, particularly after leaked internal communications revealed that employees had recommended notifying law enforcement but were overruled by leadership.
Attorney Jay Edelson, representing the plaintiffs, emphasized the community’s collective effort to hold OpenAI accountable. He stated, “The cases … represent not just a single family but an entire community stepping forward to hold OpenAI accountable for its role in the shooting.” The lawsuits accuse OpenAI of negligence, wrongful death, and product liability, asserting that the company’s failure to act on the flagged interactions contributed to the tragedy.
In response, OpenAI acknowledged the gravity of the situation. A spokesperson stated, “The events in Tumbler Ridge are a tragedy. We have a zero-tolerance policy for using our tools to assist in committing violence.” The company has since implemented measures to strengthen safeguards, including improving ChatGPT’s responses to signs of distress and enhancing detection of potential threats.
CEO Sam Altman issued a formal apology to the Tumbler Ridge community, stating, “I am deeply sorry that we did not alert law enforcement to the account that was banned in June.” British Columbia Premier David Eby, however, dismissed the apology as “necessary, and yet grossly insufficient for the devastation done to the families of ….”
The lawsuits seek damages and policy reforms to prevent future incidents. They also highlight the broader challenges faced by AI companies in balancing user privacy with public safety, especially as AI tools become more integrated into daily life. The outcome of these legal proceedings could set significant precedents for the responsibilities of AI developers in monitoring and reporting potentially harmful user behavior.