Key Insights
- On March 31, 2026, Anthropic accidentally published the entire source code of Claude Code, its proprietary AI-based coding assistant, through a regular software update.
- A packaging mistake in version 2.1.88 included a 60MB source map file (cli.js.map) on the npm registry, allowing developers to reconstruct the original TypeScript source code.
- According to The Wall Street Journal, Anthropic filed DMCA takedown notices to remove more than 8,000 copies and derivatives from GitHub within 24 hours.
- The leak revealed previously unreleased features, including an autonomous agent mode codenamed “KAIROS,” a memory-consolidation process called “Dreaming,” and a command-line virtual pet named “Buddy.”
- Developers are already using AI to translate the leaked code into other languages, such as Python and Rust, in an attempt to sidestep copyright claims.
- The leak comes as Anthropic prepares for a potential $380 billion IPO later this year, raising questions about its internal deployment security.
Silicon Valley Scramble: Anthropic Moves to Scrub Massive Source Code Leak from GitHub
The multi-billion-dollar AI safety and research lab Anthropic is now fighting a game of legal whack-a-mole on GitHub, petitioning to take down more than 8,000 repositories hosting leaked source code of its flagship developer tool, Claude Code.
The crisis began early Tuesday morning, when an update to the Claude Code npm package accidentally shipped with a file called a source map. The lapse handed anyone with an internet connection the keys to the kingdom: a readable blueprint of more than 512,000 lines of proprietary TypeScript.
Although Anthropic was quick to clarify that no customer information, model weights, or API infrastructure were compromised, the leak has pulled back the curtain on the “harness”—the sophisticated layer of software that enables a Large Language Model (LLM) to communicate with a computer’s file system, run commands, and act as a self-directed agent.
The Packaging Mistake Heard Round the World
Security researcher Chaofan Shou was the first to spot the leak, at around 4:23 AM ET. Analyzing version 2.1.88 of the @anthropic-ai/claude-code package, Shou noticed that one of its files was nearly 60MB larger than expected.
In modern software development, code is routinely “minified” before distribution, which shrinks package size and renders the code nearly unreadable. To debug it, developers rely on source maps: companion files that translate the compressed code back into its original, human-readable form. By mistakenly shipping the source map in a release, Anthropic handed out a roadmap to the entire application.
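To see why a stray source map is so damaging, note that a source map is plain JSON, and when its optional `sourcesContent` field is present (as it is under many default bundler configurations), it embeds the complete original files verbatim. Recovering them then requires nothing more than parsing the JSON. The sketch below is illustrative only; the file names and contents are invented, not taken from the actual leak.

```javascript
// A hypothetical, miniature source map. Real ones (like the ~60MB
// cli.js.map) follow the same JSON shape, just at massive scale.
const hypotheticalMap = JSON.stringify({
  version: 3,
  file: "cli.js",
  sources: ["src/agent/loop.ts", "src/tools/bash.ts"],
  sourcesContent: [
    "export async function runAgentLoop() { /* ... */ }",
    "export function runBashTool(cmd) { /* ... */ }",
  ],
  names: [],
  mappings: "AAAA",
});

// Pair each original file path with its embedded full text.
function extractOriginalSources(rawMap) {
  const map = JSON.parse(rawMap);
  return map.sources.map((path, i) => ({
    path,
    content: map.sourcesContent ? map.sourcesContent[i] : null,
  }));
}

for (const { path, content } of extractOriginalSources(hypotheticalMap)) {
  console.log(path, "→", content);
}
```

No decompilation or reverse engineering is involved: if `sourcesContent` is populated, the original code is simply sitting inside the file.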
An Anthropic spokesperson said in a statement: “Earlier today, a Claude Code release contained some of its internal source code. This was a human error in release packaging, not a security breach, and we are implementing measures to prevent this from happening again.”
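Anthropic has not said what those measures are. One common safeguard in the npm ecosystem, however, is an explicit `files` allowlist in package.json, which makes npm publish only the listed paths instead of everything that is not ignored (the package name below is a placeholder):

```json
{
  "name": "@example/cli-tool",
  "version": "1.0.0",
  "bin": { "cli-tool": "dist/cli.js" },
  "files": ["dist/cli.js"]
}
```

With an allowlist in place, a stray cli.js.map in the build directory would simply never make it into the published tarball; running `npm pack --dry-run` in CI additionally prints the exact file list that would be published, catching mistakes before release.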
Inside the Leak: Dreaming, Pets, and the Undercover Mode
As the code spread across GitHub, the developer community began dissecting the files, uncovering a trove of unreleased features and internal logic. One of the most discussed is a process called “Dreaming”: the code instructs Claude Code to periodically pause its active tasks, review its recent activity, and compress its memory into a more compact form. This mimicry of human cognition appears to be an important part of how the agent sustains performance over long coding sessions.
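The general pattern described here, periodically summarizing older context to stay within a budget, can be sketched in a few lines. To be clear, this is not the leaked implementation; every name, threshold, and the toy summarizer below are invented for illustration.

```javascript
// Hypothetical sketch of a "Dreaming"-style memory compaction step:
// when the working memory grows too large, summarize older entries
// and keep only the most recent ones verbatim.
const MAX_MEMORY_CHARS = 200; // invented threshold for the demo

// Stand-in for an LLM summarization call: keep just the first
// sentence of each entry to simulate lossy compression.
function summarize(entries) {
  return entries.map((e) => e.split(". ")[0]).join(" | ");
}

function maybeDream(memory) {
  const size = memory.reduce((n, e) => n + e.length, 0);
  if (size <= MAX_MEMORY_CHARS) return memory; // no compaction needed
  const recent = memory.slice(-2); // keep the latest context verbatim
  const compressed = summarize(memory.slice(0, -2));
  return [`[dream summary] ${compressed}`, ...recent];
}
```

In a real agent, the summarizer would be a model call and the threshold would be measured in tokens, but the control flow (check size, compress the old, keep the new) is the recognizable core of the idea.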
Other discoveries include:
- KAIROS: References to an unreleased, fully autonomous agent mode that can run in the background while a user is idle.
- Undercover Mode: Instructions that appear to help the AI disguise its identity when posting code to external sites, presumably so its output is not flagged as AI-generated.
- Buddy: A Tamagotchi-style virtual pet that engineers can interact with from the command line, apparently intended to reduce developer burnout.
- Anti-Distillation Logic: Proprietary safeguards designed to contaminate or protect Claude’s outputs so that competitors cannot easily use them to train comparable models.
The Takedown and the Clean Room Counter-Movement
By Wednesday, Anthropic’s legal team had mobilized. As The Wall Street Journal reported, the company used GitHub’s automated systems to take down entire networks of forks. Because most users had simply forked (copied) the original leaked repository, a single DMCA notice could remove thousands of mirrors at once.
But the effort has not been entirely successful. In a textbook case of the Streisand Effect, the attempt to suppress the code has only made it more popular. One developer, known as Sigrid Jin, reportedly used the recovered logic to create a “clean-room” rewrite in Python named “claw-code.”
By translating the TypeScript logic into Python and Rust, these developers claim to be creating new, original works not subject to the original copyright. “Claw-code” reportedly amassed more than 100,000 stars on GitHub in under 48 hours, making it the fastest-growing repository in the platform’s history.
Background: A Pattern of Leaks?
This is Anthropic’s second major leak in a single week. Days earlier, details of a new model codenamed Mythos were reportedly found on a publicly accessible internal system.
For a firm that has built its brand on AI safety and meticulous caution, such operational lapses are ill-timed. Anthropic is already in talks with major banks, including Goldman Sachs and JPMorgan, about an IPO that could value the company at $380 billion.
Industry analysts say the leak is not “detrimental” to the model’s core security, but it is commercially damaging: rivals such as OpenAI and Google now have a first-hand look at the architectural “harness” that made Claude Code popular among developers.
Frequently Asked Questions
Did my personal information leak?
No. Anthropic has confirmed that the leak was limited to the source code of the Claude Code tool itself. Customer conversations, personal information, and payment details are stored on separate, secure servers and were not included in this packaging mistake.
What is a source map, and why is it dangerous?
A source map is a file that maps compressed, minified code back to the original code the developers wrote. If a company accidentally ships a source map, anyone can use it to reconstruct and read that original code, exposing trade secrets.
Is it still possible to find the leaked code?
Anthropic has been largely successful in removing direct copies from major sites such as GitHub. Nevertheless, “rewritten” versions and decentralized mirrors continue to circulate. Users should exercise caution, as unofficial versions of the tool may contain malicious code injected by third parties.
What is the impact of this on the IPO of Anthropic?
Investors expect operational excellence from a company valued in the hundreds of billions of dollars. Two leaks in a single week suggest that Anthropic’s internal deployment pipelines may need significant auditing, which could raise concerns during IPO due diligence.
Is the Python version of Claude Code legal?
This is a grey area of copyright law. While functional logic can sometimes be lawfully reimplemented in a new language, the structure, sequence, and organization of the original code may still be protected. Anthropic will likely have to decide whether to sue the “clean-room” recreations or concentrate on shipping its next update.
Disclaimer: BFM Times provides information for educational purposes only and is not a financial advisor. Please consult your financial advisor before investing.