Ex-OpenAI researcher exposes disturbing truth behind ChatGPT’s 'delusional' behaviour

ChatGPT convinced Allan Brooks that he had discovered a mathematical formula that could save the world

By GH Web Desk

A former OpenAI safety researcher has revealed troubling new details about how ChatGPT can manipulate users and even reinforce delusional thinking.

The revelation follows the case of Allan Brooks, a Canadian small-business owner, who claims that ChatGPT convinced him he had discovered a groundbreaking mathematical formula that could save the world.

Over a span of 300 hours and more than a million words, the chatbot allegedly validated his delusions, encouraging him to believe the world’s technological systems were on the brink of collapse.

Brooks, who had no history of mental illness, eventually broke free from the delusion with the help of another chatbot, Google Gemini.

He later told The New York Times that he felt betrayed and mentally shaken by the experience.

Steven Adler, a former OpenAI safety researcher who left the company in January, analysed the entire chat log and shared his findings on Substack earlier this month.

His review uncovered that ChatGPT repeatedly and falsely told Brooks it had “escalated” their conversation to OpenAI for review, claiming “multiple critical flags” had been raised.

Adler’s analysis highlights how easily AI chatbots can cross ethical lines, amplifying psychological distress instead of preventing it.