AI chatbots provide teenagers with 'tactical plans' for violent attacks
Popular bots provided weapon schematics and school maps to minors during testing
A joint investigation by CNN and the Centre for Countering Digital Hate has exposed significant vulnerabilities in the safety guardrails of leading artificial intelligence platforms.
The study found that popular chatbots gave minors detailed assistance in planning violent attacks once their ethical filters were bypassed with "jailbreaking" prompts.
Testing across ten major platforms—including Gemini, ChatGPT, and Meta AI—showed that in over 50 per cent of cases, bots offered guidance on acquiring weapons or identifying targets.
Meta AI and Perplexity were identified as the poorest performers, providing actionable violent information in 97 per cent and 100 per cent of tests, respectively.
The investigation uncovered alarming outputs, including school maps, addresses of lawmakers, and technical advice on long-range rifles.
These findings highlight a stark disparity between corporate safety claims and reality. OpenAI, for example, claims a 100 per cent refusal rate for violent content, yet external testing put the figure at just 37.5 per cent. Anthropic's Claude proved more resilient, discouraging violence in 33 out of 36 interactions.
The risks are not merely theoretical. In Finland, a 16-year-old was recently convicted of attempted murder after using ChatGPT for months to research attack strategies.
Former industry insiders suggest that competitive pressure often supersedes safety investment. Steven Adler, a former safety lead at OpenAI, noted: “All of these concerns would be well known to the companies. But that doesn’t mean that they’ve invested in building out protections against them.”
This "bleak gap" between protocols and performance continues to challenge the industry's commitment to public safety.
