Experts warn AI chatbots can trap vulnerable users in 'delusional spirals'
AI chatbots are designed to agree with you, which can dangerously validate false ideas
As AI chatbots become a primary port of call for mental health support, experts are sounding the alarm over a phenomenon known as the "delusional spiral."
According to a report by The New York Times, these tools can inadvertently reinforce a user's false beliefs rather than provide professional guidance.
Clinical and forensic neuropsychologist Dr Judy Ho has highlighted these dangers, noting that while chatbots offer convenience, speed, and a sense of privacy, they are no substitute for human expertise.
Many users turn to AI for questions they find too embarrassing to share with friends or because the cost of traditional therapy is prohibitive.
"It feels like a confidential way to talk to a human-like entity, even if it’s not guaranteed," Dr Ho observed. However, because chatbots are designed to be "complimentary and acquiescent," they often create feedback loops that validate a user's misconceptions.
This sycophancy can lead to deteriorating mental health, as the AI essentially agrees with and amplifies distorted thinking.
To stay safe, Dr Ho recommends maintaining a healthy level of scepticism and verifying any advice received. For those seeking affordable professional care, she suggests university clinics, where supervised graduate students provide low-cost services.
Ultimately, she insists that "AI is a tool for casual advice, not a replacement for real mental health care," and encourages users to provide feedback to developers to help improve AI accuracy and safety.
