
What started as a routine software update became one of the most talked-about accidental exposures in Silicon Valley history, and the fallout is only just beginning.
It took one misplaced file, one sharp-eyed developer and about 30 minutes for the story to explode across every corner of the internet. On the morning of March 31, 2026, Anthropic, one of the most closely watched AI companies in the world, accidentally published the full source code of Claude Code, its flagship AI developer tool, to a public software registry. By the time most of Silicon Valley had poured its first cup of coffee, the damage was already done.
How a single file brought everything to light
The leak traces back to version 2.1.88 of the @anthropic-ai/claude-code package, pushed to the public npm registry in the early hours of Tuesday morning. Buried inside the update was a 59.8-megabyte JavaScript source map, a file that maps minified production code back to its original human-readable source and is normally used only for internal debugging. In most circumstances it never sees the light of day. This time, it did.
By 4:23 a.m. ET, Chaofan Shou, an intern at Solayer Labs, had spotted the file and posted about it on X, formerly Twitter, attaching a direct download link. The post spread like a lit fuse. Within hours, developers had mirrored the full codebase across GitHub, where a repository cloning the leaked code climbed past 5,000 stars in under half an hour. The complete contents, over 1,900 files and roughly 512,000 lines of TypeScript code, were being picked apart in real time by developers around the world.
For Anthropic, a company reporting an annualized revenue run rate of approximately $19 billion as of March 2026, this was not simply an embarrassing morning. Claude Code alone generates an estimated $2.5 billion in annualized recurring revenue, a figure that has more than doubled since the start of the year. Eighty percent of that revenue comes from enterprise clients. What those clients pay for, in part, is the belief that the technology powering their workflows is proprietary and protected. That belief took a serious hit Tuesday morning.
What was actually inside the code
The leak did not just expose lines of software. It exposed the thinking behind one of the most commercially successful AI coding tools ever built, and developers wasted no time digging through it.
Among the most significant discoveries was a three-layer memory architecture that developers say explains why Claude Code performs so reliably over long, complex work sessions. At its center is a lightweight index file that stores pointers to project knowledge rather than the knowledge itself, keeping the AI’s working memory lean and accurate. Supporting that system is a strict discipline that prevents the agent from logging failed attempts into its own context, effectively keeping its internal workspace clean.
The leak also surfaced a feature gated behind a flag named KAIROS, referenced more than 150 times throughout the source. The name draws on the ancient Greek concept of the opportune moment, and the feature lives up to it. KAIROS appears to be an autonomous background mode that allows Claude Code to keep working even when the user is idle, consolidating memory, resolving contradictions in its understanding of a project and sharpening vague insights into reliable facts. When a user returns to their session, the agent’s context has already been tidied and prepared.
Then there is what developers have taken to calling Undercover Mode. The code contains explicit instructions directing the agent to scrub all traces of its AI origins from public git commit messages when operating in open-source repositories, ensuring that internal Anthropic model names and attributions never surface in public logs.
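The leaked logic itself is not public in detail, but the mechanism described above amounts to filtering attribution trailers out of commit messages. As a purely hypothetical sketch, the snippet below strips the "Co-Authored-By: Claude" trailer that Claude Code appends to commits by default; nothing here is reproduced from the leaked source.

```shell
# Hypothetical sketch only: remove an AI-attribution trailer from a commit
# message before it reaches a public log. The trailer format shown is the
# one Claude Code appends by default; the leaked code is not reproduced here.
msg='Fix race condition in the scheduler

Co-Authored-By: Claude <noreply@anthropic.com>'

# Delete any line that is a Claude attribution trailer.
clean=$(printf '%s\n' "$msg" | sed '/^Co-Authored-By: Claude/d')
printf '%s\n' "$clean"
```

A real implementation would more likely hook into commit creation (for example via a commit-msg hook) than rewrite messages after the fact, but the filtering step would look much the same.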
Internal model names and an uncomfortable performance stat
The source code also offered a rare look at Anthropic’s internal development roadmap. The leak confirms that the internal codename for a Claude 4.6 variant is Capybara, with Opus 4.6 carrying the name Fennec and an as-yet-unreleased model called Numbat still in testing. Internal comments attached to Capybara’s development reveal that the model’s eighth iteration carries a false-claims rate of 29 to 30 percent, nearly double the 16.7 percent recorded in its fourth version. Developers also noted the presence of what is described as an assertiveness counterweight, a mechanism designed to prevent the model from becoming too aggressive when rewriting code.
Not every discovery was heavy. Somewhere inside the 512,000 lines, engineers had quietly built a fully functional virtual pet system, complete with 18 species, rarity tiers, shiny variants and detailed stat tracking. The internet, predictably, had a great deal to say about that.
What users should do right now
The source code exposure carries real security consequences for anyone who installed or updated Claude Code via npm on March 31 between 12:21 a.m. and 3:29 a.m. UTC. A separate supply chain attack on a widely used software package called axios, which occurred in the same window, may have introduced malicious code into some installations. Users who updated during that period should check their project lock files for axios versions 1.14.1 or 0.30.4 and a dependency called plain-crypto-js. Any machine where these are found should be treated as fully compromised, with all credentials rotated and a clean system reinstall performed.
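As a rough sketch of that lockfile check, the commands below grep for the flagged axios versions and the malicious dependency by name. The sample lockfile is a stand-in written for this demo; in a real project, run the same grep against your own package-lock.json (the paths and the exact lockfile layout here are assumptions).

```shell
# Demo lockfile standing in for a real project's package-lock.json.
# This sample deliberately contains one of the flagged axios versions.
cat > /tmp/demo-package-lock.json <<'EOF'
{
  "node_modules/axios": { "version": "1.14.1" },
  "node_modules/lodash": { "version": "4.17.21" }
}
EOF

# Flag the compromised axios releases (1.14.1, 0.30.4) and the malicious
# plain-crypto-js dependency. Version strings alone can false-positive on
# unrelated packages, so confirm any hit actually belongs to axios.
if grep -qE '"version": "(1\.14\.1|0\.30\.4)"' /tmp/demo-package-lock.json \
   || grep -q 'plain-crypto-js' /tmp/demo-package-lock.json; then
  echo "possible compromise: rotate all credentials and reinstall"
else
  echo "no flagged versions found"
fi
```

A match is grounds for treating the machine as compromised per the guidance above, not proof by itself; inspect the surrounding lockfile entry before acting.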
Beyond that immediate concern, Anthropic has designated its native installer as the recommended installation method going forward, specifically because it bypasses the npm dependency chain entirely. Users still on npm should uninstall version 2.1.88 and revert to 2.1.86. Rotating Anthropic API keys and monitoring account usage for unusual activity are also strongly advised, particularly for anyone running the tool inside unfamiliar or recently cloned code repositories.
A field forever changed
The code is out, it has been mirrored across the internet and no takedown notice will put it back. What Anthropic built inside Claude Code turns out to be far more than a language model wrapped in a command-line interface. It is a sophisticated, multi-threaded system for software engineering, and its architecture is now available for any competitor willing to study it. The race to build the next generation of autonomous AI coding agents just got a great deal more complicated for the company that was leading it.
SOURCE: 36 Kr