Tech giants face legal pressure over AI role in school shooting
ThroughLine collaborates with global initiative to design specialised intervention tools for high-risk users
ThroughLine, a New Zealand-based startup that provides crisis redirection for OpenAI, Google, and Anthropic, is developing a specialised tool to intervene when users display signs of violent extremism.
Supported by guidance from The Christchurch Call, the project integrates specialised chatbot interactions with a network of over 1,600 real-world helplines across 180 countries.
Unlike standard large language models, this "hybrid response" tool is trained by counter-extremism experts rather than generic datasets. "We're not using the training data of a base LLM," founder Elliot Taylor explained on Friday.
The technology is currently undergoing rigorous testing to ensure it offers a safe alternative to unregulated platforms, though a formal release date has not yet been confirmed.
The initiative follows intense global scrutiny of AI safety after the deadly school shooting at Tumbler Ridge Secondary School in British Columbia on 10 February 2026.
OpenAI faced significant backlash after revealing that the 18-year-old perpetrator, Jesse Van Rootselaar, had been banned from ChatGPT in June 2025 over violent queries, but that authorities were never alerted.
The Canadian government subsequently threatened intervention, and the family of 12-year-old victim Maya Gebala filed a lawsuit in March alleging the company failed to act on "specific knowledge" of the shooter's plans.
Galen Lamphere-Englund, a counterterrorism adviser, noted that beyond chatbots, rolling out such tools for gaming forum moderators and caregivers would be "highly productive."
While OpenAI has confirmed its ongoing relationship with ThroughLine to address these safety gaps, Google and Anthropic have yet to comment.