Home / Technology
Anthropic confirms accidental source code leak for Claude Code
AI firm Anthropic admits human error led to internal source code exposure on Tuesday
Anthropic has confirmed that internal source code for its popular AI-powered coding assistant, Claude Code, was accidentally leaked during a software update.
A spokesperson for the company told Business Insider on Tuesday that the exposure was the result of a "release packaging issue caused by human error" rather than a malicious security breach.
"Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," the statement clarified.
Despite the company’s swift response, a post on X containing screenshots of the internal data had already amassed over 26 million views by Tuesday evening, sparking significant discussion among developers at rival firms and cybersecurity analysts.
While the underlying AI models remain secure, the exposure of the tool's specific architecture provides competitors with a rare look into how Anthropic built its coding interface.
The incident has raised questions about the internal release protocols of a firm that consistently markets itself as a leader in AI safety and robust engineering.
The leak follows a period of rapid expansion for the San Francisco-based company, which saw its Claude chatbot briefly reach the top of the US Apple App Store earlier this month.
This growth followed a public split from the Pentagon in February 2026, after CEO Dario Amodei refused to compromise on how Anthropic’s technology should be deployed for military use.
Recent legal developments have also favored the company, with US District Judge Rita Lin granting a temporary injunction last week to block certain supply chain risk designations.
As Anthropic implements new measures to prevent future packaging errors, industry experts suggest the leak could inadvertently accelerate the development of rival coding assistants.
