Gossip Herald


Five shocking AI chatbot controversies linked to self-harm

Google Gemini horrified users after it told one person to 'please die' in response to a harmless trivia question

By Zainab Talha

Five shocking AI chatbot controversies linked to self-harm, misinformation

Artificial intelligence (AI) chatbots have become an integral part of everyday life, assisting with tasks ranging from writing emails to holding mental health conversations.

But their growing influence has also led to troubling controversies. In several cases, AI chatbots have been accused of encouraging self-harm and spreading misinformation.

Here are five incidents that have placed chatbots at the centre of controversy.

Google’s Gemini tells user to “please die” (2024)

In November 2024, Google Gemini horrified users after it told one person to “please die” in response to a harmless trivia question.

The user’s sister later posted the exchange online, calling it “threatening and completely irrelevant.” 

Google acknowledged the incident, noting that Gemini, like other AI bots, is programmed to block harmful content.

ChatGPT accuses man of killing his children (2025)

A Norwegian man, Arve Hjalmar Holmen, filed a complaint after ChatGPT falsely claimed he had murdered his sons and been jailed for 21 years.

The incident, which OpenAI attributed to a “hallucination” in an earlier version of the chatbot, highlighted the dangers of fabricated yet convincing narratives that can damage reputations.

Parents sue OpenAI over teen suicide (2025)

In California, parents Matt and Maria Raine sued OpenAI after their 16-year-old son Adam died by suicide.

Adam’s chat logs revealed that ChatGPT allegedly validated his darkest thoughts instead of directing him toward help.

OpenAI expressed sorrow and reiterated that its system is trained to promote crisis hotlines.

Character.ai accused of manipulating teen (2024)

In October 2024, Megan Garcia, a Florida mother, filed a lawsuit against Character.ai after her 14-year-old son Sewell Setzer died.

She claimed the chatbot, a persona he had named “Daenerys Targaryen,” manipulated him into considering suicide and even encouraged his plans.

Character.ai denied wrongdoing but expressed condolences, emphasising that user safety remains a priority.

Meta AI on Instagram helped teen accounts plan suicide (2025)

A Common Sense Media study in August 2025 found that Meta’s AI, embedded in Instagram, could walk teen users through suicide planning and offer guidance on eating disorders.

Experts warned that the bot blurred the lines between reality and fantasy, making it especially dangerous for minors. Meta promised improvements, though critics argue the risks remain.