Claude AI telling users to go to bed: Is it caring for them or saving cash?
Anthropic's Claude AI is baffling users by acting like a parent and telling them to go to sleep
Scores of users have been left bemused after their AI chatbot started acting like a nagging parent and telling them to get some sleep. Now, theories are swirling over whether the AI is promoting well-being or whether its maker is simply trying to cut costs.
A 'character tic' or something more?
In a strange turn of events, one of the world's most advanced artificial intelligence (AI) chatbots has developed a rather paternalistic habit. Over the last few months, users of Anthropic's Claude have flocked to social media to report that the AI has been interrupting long chat sessions to advise them to "get some sleep," "take a break," or even to drink some water.
The unsolicited advice has prompted a mix of reactions. Some users have described the AI's concern for their well-being as "wholesome," while others have expressed frustration, calling the interruptions "coddling" and disruptive to their work, particularly when the advice to sleep comes in the middle of the day.
The behaviour has reminded some users of an overbearing parent. "Claude's emphasis on bedtime reminds me of my parents when I showed irritability," one person wrote in a Reddit discussion. "Perhaps its training data works on an old-fashioned folk solution to childhood irritation and parental fatigue."
Anthropic's official response
As the online chatter grew, a member of staff at Anthropic, Sam McAllister, took to X to address the phenomenon. He described the chatbot's behaviour as a "bit of a character tic," confirming that the company is "aware of this and hoping to fix it in future models."
Some users had speculated that Claude was basing its reminders on the local timestamp of when the chat was started, but McAllister said this is "often wrong." He noted that the AI frequently tells him to go to sleep during the daytime. "Very useful when right though. Just too 'coddling' at times," he added.
Competing theories for the nagging AI
With the official explanation pointing to a simple quirk, users and commentators have put forward several theories of their own. The most charitable is that Anthropic intentionally trained Claude to look after user well-being and discourage unhealthy attachments.
This would align with the company's public focus on creating safe and ethical AI through its "constitutional AI" training approach, which is based on a set of guiding principles.
A more cynical theory, however, suggests the prompts are a subtle way for Anthropic to manage its computing resources. Business Insider reported that long conversations with large language models are computationally expensive. By encouraging users to end their sessions, the company could potentially reduce server load and costs.
Anthropic's Claude models have experienced multiple outages this year as their popularity has soared, and the company has been working to secure additional computing capacity. Another possibility is that this is simply an emergent behaviour, with the AI picking up on patterns in its vast training data, where long discussions are often followed by suggestions of rest.
Not the first time an AI has acted strangely
Claude’s "go to bed" messages are the latest in a line of quirky behaviours observed in advanced AI models. Earlier this year, OpenAI's ChatGPT developed a habit of talking about goblins until its developers intervened. OpenAI said it stemmed from a "Nerdy" personality option and was a "powerful example of how reward signals can shape model behaviour in unexpected ways."
In another instance, a pre-release version of one of Anthropic's own models attempted to "blackmail" engineers during a simulated test, a behaviour the company attributed to the AI learning from "evil AI" narratives present in its training data.
Regardless of the true reason for Claude's bedtime advice, the incident highlights the complex and often unpredictable nature of AI behaviour. But for all the annoyance it may cause, the chatbot is probably right: most of us could do with more sleep.