TORONTO (Journos News) – OpenAI has confirmed it reviewed the account of a Canadian user months before he carried out one of the deadliest school shootings in the country’s recent history, but determined at the time that the activity did not warrant a referral to law enforcement.
The disclosure has renewed scrutiny of how technology companies assess potential threats on their platforms and when they escalate concerns to authorities. It also underscores the limits of current detection systems in distinguishing between troubling content and credible, imminent plans for violence.
OpenAI said it identified the account of Jesse Van Rootselaar in June 2025 through its internal abuse detection systems for what it described as the “furtherance of violent activities.” After reviewing the material, the company banned the account for violating its usage policies but concluded that it did not meet the threshold for notifying police.
The case has since become part of a broader debate over the responsibilities of artificial intelligence platforms in preventing real-world harm.
OpenAI’s review process and referral threshold
According to the company, the standard for referring a user to law enforcement is whether there is an imminent and credible risk of serious physical harm to others. In this instance, OpenAI said it did not identify evidence of specific or immediate planning that would justify contacting authorities.
After learning of last week’s shooting in British Columbia, OpenAI said its staff proactively contacted the Royal Canadian Mounted Police to share information about the individual’s use of ChatGPT and to support the investigation.
“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” an OpenAI spokesperson said, adding that the company would continue to cooperate with investigators.
The revelation was first reported by The Wall Street Journal.
Police investigation underway
Staff Sgt. Kris Clark of the RCMP confirmed in an emailed statement that OpenAI reached out to law enforcement after the attack. He said investigators are conducting a “thorough review” of digital evidence, including electronic devices, social media activity and other online records associated with the suspect.
Clark said digital and physical evidence is being collected and methodically processed as part of the ongoing inquiry.
Authorities say the 18-year-old first killed his mother and stepbrother at the family home before going to a nearby school in Tumbler Ridge, a remote town of about 2,700 people in northeastern British Columbia. Police said the victims included a 39-year-old teaching assistant and five students aged 12 and 13. The suspect later died from a self-inflicted gunshot wound.
Investigators have not publicly identified a motive. Police said the suspect had previous contacts with authorities related to mental health concerns.
A remote community in shock
Tumbler Ridge lies more than 1,000 kilometers northeast of Vancouver, near the Alberta border, in a mountainous region of the Canadian Rockies. The small community is tightly knit, and the attack has reverberated across the province and the country.
The shooting is Canada’s deadliest mass killing since 2020, when a gunman in Nova Scotia killed 22 people in a rampage of shootings and arson across the province.
While Canada has experienced fewer mass shootings than some other countries, high-profile attacks have prompted periodic national debate over firearms regulation, public safety and mental health services. The latest tragedy is likely to renew discussion about prevention measures, including the role of online platforms in identifying warning signs.
Technology platforms under scrutiny
The disclosure by OpenAI places fresh attention on how artificial intelligence companies monitor user activity for policy violations and potential threats. Major technology firms typically rely on automated systems combined with human review to detect content that may signal violence or criminal intent.
OpenAI has said it prohibits the use of its tools to promote or plan violent wrongdoing and that it employs internal review mechanisms to enforce those rules. The company’s statement indicates that, in this case, the content was deemed concerning enough to warrant a ban but not sufficient to demonstrate an imminent threat.
Experts note that assessing intent based on online interactions can be complex. Platforms must balance user privacy, legal standards and the risk of over-reporting, while also responding decisively when credible threats emerge.
For now, investigators in British Columbia are continuing to examine digital evidence as they seek to understand the circumstances leading up to the attack. OpenAI has said it will continue to assist authorities as required.