Lawsuit Alleges ChatGPT 'Knew' About Tumbler Ridge Mass Shooting Plot in Advance — but Did Nothing

Family sues OpenAI claiming ChatGPT knew about Tumbler Ridge shooting plot but failed to warn authorities.

The family of a girl who was critically injured in a mass shooting in British Columbia, Canada, is suing the artificial intelligence company OpenAI, claiming that its product ChatGPT knew the 18-year-old transgender killer was plotting an attack but failed to notify authorities.


As we reported, the shooter went by the name of Jesse Van Rootselaar, and in February, he gunned down his mother and 11-year-old brother before targeting children at the Tumbler Ridge Secondary School while wearing a dress. In all, he took eight innocent lives, including, tragically, six children. He committed suicide as police closed in.

Could the outcome have been different if ChatGPT had told people what it “knew”?

Twelve-year-old Maya Gebala was shot in the neck and head in the attack in Tumbler Ridge on 10 February and remains in hospital.

An initial ChatGPT account linked to the suspect, 18‑year‑old Jesse Van Rootselaar, was banned by OpenAI in June 2025 due to the nature of his conversations with the chatbot, but Canadian police were not notified.

OpenAI told the BBC it was committed to making "meaningful changes" to help prevent similar tragedies in the future.


MORE: Father of Transgender Canadian Mass Shooter Talks About His 'Son' and the 'Heartbreak' He Caused

New: Canadian Mass Shooter Identified As Transgender, Authorities Rush to Not 'Misgender'



The story is chilling. Artificial intelligence is rapidly changing the world, for better and for worse, but one thing it does not have is feelings. If it did, it presumably would have done everything in its power to stop the bloodshed:

The civil lawsuit, brought by Gebala's mother Cia Edmonds, alleges Rootselaar set up an account with ChatGPT before turning 18, something users can do with parental consent.

The plaintiffs allege no age verification took place on the site.

The lawsuit claims the suspect saw the chatbot as a "trusted confidante" and described "various scenarios involving gun violence" to it over several days in late spring or early summer 2025.

Twelve OpenAI employees then reportedly flagged the posts as "indicating an imminent risk of serious harm to others" and recommended that Canadian law enforcement be informed, the lawsuit alleges.

Instead, it is alleged the request to contact the authorities was "rebuffed" and the only action taken was to ban Rootselaar's account.

Her family posted video updates: 'Still fighting, still with us.'

WSJ: OpenAI flagged sh**ter's ChatGPT for gun violence scenarios in June 2025; staff debated but didn't alert authorities. Account banned.

Prayers for Maya & the community.


Although the company banned Rootselaar’s account for violent content, he was able to open another one. Meanwhile, OpenAI said it did not notify the police because it saw nothing that met “its threshold of a credible or imminent plan for serious physical harm to others.”

The lawsuit says otherwise, alleging that the company "had specific knowledge of the shooter's long-range planning of a mass casualty event" but "took no steps to act upon this knowledge".

It’s a horrible story, and our hearts go out to Maya, her family, and all of those who were left heartbroken. Even if the lawsuit is successful, it will not bring back the children or undo the damage, but hopefully, it will put AI companies on notice that they need to be on high alert for extremism and psychosis among their users. 

Editor’s Note: Do you enjoy RedState’s conservative reporting that takes on the radical left and woke media? Support our work so that we can continue to bring you the truth.

Join RedState VIP and use the promo code FIGHT to get 60% off your VIP membership!


Bob Hoge