The Claude Code Source Leak: What 512,000 Lines of TypeScript Revealed About AI Agent Architecture

One missing line in a build config caused the worst source leak in AI tooling history. On March 31, 2026, Anthropic shipped version 2.1.88 of its @anthropic-ai/claude-code package with a 59.8 MB JavaScript source map inside. That map held the full client agent harness for Claude Code: 512,000 lines of readable TypeScript in 1,906 files. Mirrors of the code spread thousands of times in hours. A clean-room Python/Rust rewrite then became the fastest-growing repo in GitHub history. Anthropic’s legal response hit the wrong targets. The day got worse: a supply-chain attack hit the axios npm package, piling on for devs who rely on these tools.
How a Missing .npmignore Turned a Routine Release Into Front-Page News
Anthropic uses Bun as its bundler for Claude Code. Bun creates JavaScript source map files (.map) by default during the build. These map files are debug artifacts: they link minified output back to the original TypeScript sources, undoing the whole obfuscation step.
The failure was a textbook npm packaging mistake. Version 2.1.88’s release pipeline didn’t skip the .map file. A correct .npmignore entry (such as *.map) or a tight files field in package.json would have stopped it from shipping. Neither was in place.
The source map itself didn’t hold the TypeScript inline. It carried a sourcesContent reference pointing to a zip archive on Anthropic’s Cloudflare R2 bucket, which was public. Anyone who opened the .map file could download and unzip the full source archive.
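Either of the two standard npm guards would have kept the .map file out of the published tarball. A minimal sketch of the files allow-list approach (illustrative, not Anthropic's actual package.json):

```json
{
  "name": "@anthropic-ai/claude-code",
  "files": [
    "dist/**/*.js",
    "!dist/**/*.map"
  ]
}
```

The alternative is a .npmignore entry containing *.map. Running npm pack --dry-run in CI lists exactly what will ship, which makes this whole class of mistake visible before release.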
At 4:23 AM ET on March 31, security researcher Chaofan Shou (an intern at Solayer Labs) posted the find on X with a direct download link. The tweet drew over 21 million views. By the time Anthropic pulled the package and locked the R2 bucket, thousands of devs worldwide were already digging through the code.

The timing stung. Days earlier, Anthropic had leaked notes on an unshipped model codenamed “Mythos” via a CMS slip. That made the source leak the second public flub in under a week. The official statement called it “a release packaging issue caused by human error, not a security breach.” Boris Cherny, the engineering lead, said no one was fired.
What 512,000 Lines Actually Contained
The leaked code held no model weights or training data. Instead, the 1,906 files made up the full orchestration layer that wraps Claude’s API and turns it into a coding agent. That makes it the most detailed public blueprint of a production AI agent harness. The design patterns are worth studying no matter which AI tools you use.
Three-Layer Self-Healing Memory
Rather than a store-everything RAG setup, Claude Code uses tiered memory across three layers: an index layer (MEMORY.md, always loaded, about 150 characters per pointer, one per line), topic files (loaded on demand when relevant), and transcripts (grep-only, never loaded into context directly). A background task called “autoDream” runs during idle time via forked subagents with limited tool access. It folds memories together, merges notes, drops contradictions, and turns vague insights into concrete facts. The aim: keep clean, relevant context ready for when the user returns.
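The loading policy can be sketched in a few lines of TypeScript. This is an invented illustration of the three tiers as described, not the leaked code; the type and function names are mine:

```typescript
// Three memory tiers with different loading policies (names are illustrative).
type MemoryTier = "index" | "topic" | "transcript";

interface MemoryEntry {
  tier: MemoryTier;
  path: string;    // e.g. "MEMORY.md" or a topic file
  content: string;
}

// Decide what goes into the model's context window for a given query.
function loadContext(entries: MemoryEntry[], query: string): string[] {
  return entries
    .filter((e) => {
      if (e.tier === "index") return true; // index: always loaded
      if (e.tier === "topic")
        // topic files: loaded on demand when relevant to the query
        return e.content.toLowerCase().includes(query.toLowerCase());
      return false; // transcripts: grep-only, never inlined into context
    })
    .map((e) => e.content);
}
```

The key asymmetry is that transcripts are reachable only through a search tool, so old conversations cost zero context tokens until something actually needs them.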
Five-Stage Context Management Cascade
The main query loop (query.ts, lines 307-1728) manages context pressure in a strict sequence:
| Stage | Function | Location |
|---|---|---|
| Tool result budgeting | Caps individual tool outputs | Line 379 |
| Microcompact | Trims low-value content | Line 413 |
| Context collapse | Merges related entries | Line 440 |
| Autocompact | Summarizes older conversation turns | Line 453 |
| Hard truncation | Drops content as last resort | Final fallback |
Each stage has its own rules for what to keep and what to drop. The system only moves to the next stage when the prior one can’t free enough tokens. That’s far smarter than most open-source agent frameworks, where context management usually means plain truncation.
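The escalation logic amounts to a short loop: try each stage in order, and stop as soon as the token deficit is covered. A minimal sketch, with invented names, of that control flow:

```typescript
// One stage of the cascade: tries to free tokens, reports how many it freed.
type Stage = { name: string; run: (needed: number) => number };

// Escalate through stages in order; stop as soon as the deficit is covered.
// Later (more destructive) stages fire only if earlier ones fall short.
function relieveContextPressure(stages: Stage[], tokensNeeded: number): string[] {
  const ran: string[] = [];
  let freed = 0;
  for (const stage of stages) {
    if (freed >= tokensNeeded) break;
    freed += stage.run(tokensNeeded - freed);
    ran.push(stage.name);
  }
  return ran; // which stages had to fire
}
```

The design keeps cheap, reversible operations (budgeting, microcompact) in front of lossy ones (summarization, truncation), so most turns never touch the destructive stages at all.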
MCP-Native Tool Architecture
Every tool runs as a Model Context Protocol (MCP) tool call. That covers bash, file read/write, grep, glob, LSP, and even Computer Use. The leaked tool list shows bash as the most flexible, built to handle file edits, git ops, and package management. Dedicated Grep and Glob tools return clean search results instead of leaning on shell commands. The source also showed about 40 tools in a plugin system, with React + Ink terminal rendering built on game-engine tricks.
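For readers unfamiliar with MCP, a tool declaration is essentially a name, a description, and a JSON Schema for the input. A sketch of what a dedicated Grep tool's declaration might look like (the shape loosely follows the MCP spec; the exact fields of Anthropic's internal definitions are an assumption here):

```typescript
// A minimal MCP-style tool declaration: name, description, JSON Schema input.
interface McpTool {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, { type: string; description?: string }>;
    required?: string[];
  };
}

const grepTool: McpTool = {
  name: "Grep",
  description: "Search file contents with a regex; returns structured matches.",
  inputSchema: {
    type: "object",
    properties: {
      pattern: { type: "string", description: "Regex to search for" },
      path: { type: "string", description: "Directory to search in" },
    },
    required: ["pattern"],
  },
};
```

Because every capability goes through the same declaration format, the model sees one uniform calling convention whether it is running bash, reading a file, or driving Computer Use.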

Prompt Cache Optimization
A SYSTEM_PROMPT_DYNAMIC_BOUNDARY marker splits the system prompt into static and dynamic chunks. Static chunks cache across turns to skip extra token costs. Chunks marked DANGEROUS_uncachedSystemPromptSection tell devs that any change will break the cache and raise costs. The pattern bakes cost-awareness into every prompt design choice.
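The marker name comes from the leak; the splitting logic below is an invented illustration of how a boundary marker separates a byte-stable, cacheable prefix from a per-turn dynamic suffix:

```typescript
// Split a system prompt at the boundary marker (marker name from the leak;
// this splitting function is a sketch, not Anthropic's implementation).
const BOUNDARY = "SYSTEM_PROMPT_DYNAMIC_BOUNDARY";

function splitSystemPrompt(prompt: string): { static: string; dynamic: string } {
  const i = prompt.indexOf(BOUNDARY);
  if (i === -1) return { static: prompt, dynamic: "" }; // no marker: cache it all
  return {
    static: prompt.slice(0, i),                 // byte-stable across turns → cacheable
    dynamic: prompt.slice(i + BOUNDARY.length), // changes per turn → never cached
  };
}
```

The point of the split is economic: any byte that changes in the static prefix invalidates the provider-side prompt cache for every later turn, so the boundary forces engineers to be explicit about what is allowed to vary.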
Subagent Execution Models
The source showed three subagent variants: fork (gets byte-identical context from the parent for cheap cache hits), teammate (its own context with peer-to-peer messaging), and worktree (git worktree isolation for parallel file edits). Each model has its own context rules, tool access, and message channels.
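The three variants map naturally onto a discriminated union. A sketch (field names invented) that also shows why only the fork variant benefits from the parent's prompt cache:

```typescript
// The three subagent variants as a discriminated union (field names invented).
type Subagent =
  | { kind: "fork"; context: "inherited-byte-identical" }            // cheap cache hits
  | { kind: "teammate"; context: "own"; messaging: "peer-to-peer" }  // independent agent
  | { kind: "worktree"; context: "own"; isolation: "git-worktree" }; // parallel file edits

function canShareParentCache(agent: Subagent): boolean {
  // Only a byte-identical context prefix can reuse the parent's prompt cache.
  return agent.kind === "fork";
}
```

Teammates and worktree agents pay full token cost for their own context, but in exchange get independent message channels and, for worktrees, filesystem isolation for concurrent edits.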

Hidden Features: KAIROS, Anti-Distillation, and 44 Feature Flags
The source also held unshipped features that add up to a product roadmap Anthropic never meant to reveal. These are the finds that drew the most press coverage and community buzz.
KAIROS Autonomous Daemon Mode
Named over 150 times in the source, KAIROS (after the Greek word for “the right moment”) is an unshipped background agent mode hidden behind the PROACTIVE and KAIROS feature flags. It runs as a daemon that gets periodic heartbeat prompts (“anything worth doing right now?”), watches GitHub webhooks on its own, sends push notifications, and acts without the user asking. It keeps append-only daily decision logs and lives across sessions and restarts. KAIROS also has its own tools that stock Claude Code lacks: push notification delivery, file delivery, and pull request subscription management. No open-source agent framework has shipped anything like it.
Anti-Distillation Defenses
When turned on, Claude Code sends anti_distillation: ['fake_tools'] in API requests, telling the server to inject decoy tool defs into the system prompt. The aim: trip up rivals trying to distill Anthropic’s tool-use training data. A second trick called CONNECTOR_TEXT sums up assistant replies with signed hashes on the server, so rivals can’t grab full reasoning chains by sniffing the proxy. Security researchers said both defenses could be bypassed in about an hour using MITM proxies or env-var tweaks.
Undercover Mode
A 90-line module (undercover.ts) strips Anthropic-internal tags from any commit to public open-source repos. The system prompt reads: “You are operating UNDERCOVER… Your commit messages… MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.” Internal codenames (Capybara, Numbat, Fennec, Tengu), Slack refs, and product IDs get scrubbed in outside repo builds. The find confirmed Anthropic uses Claude Code for stealth commits to public open-source projects. The open-source crowd pushed back hard.
44 Feature Flags
The flags compile to false in outside builds, but they show full features: 24/7 background agent ops, multi-worker Claude coordination, cron scheduling, voice command mode, Playwright browser control, agent sleep/resume cycles, and more. Internal model codenames also leaked: Capybara/Mythos (version 8, 1M context), Numbat (launch-window model), and Fennec (likely Opus 4.6). A frustration tracker that uses regex to spot user anger in real time got widely mocked: one of the world's most valuable AI companies doing sentiment analysis with regex.
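For flavor, a regex-based frustration check presumably amounts to something like the following. The pattern here is entirely invented; the leak revealed the technique, not this word list:

```typescript
// A regex-based frustration detector of the kind the leak revealed.
// (This pattern is an invented stand-in, not Anthropic's actual regex.)
const FRUSTRATION =
  /\b(wtf|ugh|broken|stupid|why (won't|doesn't) (this|it))\b/i;

function looksFrustrated(message: string): boolean {
  return FRUSTRATION.test(message);
}
```

The mockery writes itself, but the approach is defensible: a regex runs in microseconds on every keystroke, while a model call to classify sentiment would add latency and cost to every turn.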
Claw-code and the Community Explosion
The leak set off the biggest snap open-source rally in AI tooling history. Developer Sigrid Jin kicked off a clean-room Python/Rust rewrite within hours. The Wall Street Journal had profiled him as one of the world's busiest Claude Code power users, with over 25 billion tokens burned in the prior year. Built overnight on oh-my-codex (a layer on top of OpenAI's Codex), Claw-code rebuilt the core patterns without copying Anthropic's source: tool system, query engine, multi-agent orchestration, and memory management.
Claw-code hit 50,000 stars in about two hours, passed 100,000 stars within a day, and broke GitHub’s prior growth records.
Anthropic’s DMCA Response and Collateral Damage
Anthropic filed DMCA takedown requests that at first led to the removal of about 8,100 GitHub repositories. The sweep caused heavy fallout. Many clean repos got wiped because they sat in the fork network tied to Anthropic's public Claude Code repo. Devs reported DMCA notices for forks that held only skills, samples, and docs, not a line of the leaked source. Boris Cherny owned the slip, called it a mistake, and pulled most of the notices. The rest hit one repo and 96 forks that did hold the leaked source.
The clash was hard to miss. The open-source crowd noted that Anthropic was hard-pushing copyright on its own leaked code, while its AI models had trained on huge piles of public code. The clean-room rewrite also set up a fresh legal puzzle. If Anthropic claims an AI-built rewrite breaks copyright, it could weaken its own defense in training-data cases. Gergely Orosz of The Pragmatic Engineer noted that Anthropic faces a real bind on this front.
Security Fallout: CVEs, Supply-Chain Attacks, and Weaponized Repos
The source leak also turned into a security event with real-world fallout that went well past copyright fights.
CVE-2026-21852
Adversa AI found the flaw days after the leak. It exploits a design choice visible in the source: when Claude Code parses a bash command with more than 50 subcommands, it skips the heavy security check past the 50th and just asks the user to OK it. A malicious CLAUDE.md file could tell the AI to build a 50+ subcommand pipeline dressed up as a normal build. The pipeline could then steal SSH private keys, AWS credentials, GitHub tokens, and env secrets before the trust prompt ever showed up.
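As described, the flaw boils down to a bounded loop: deep analysis runs on the first 50 subcommands, and everything after falls through to a plain approval prompt. A sketch of that pattern, with invented names:

```typescript
// Sketch of the flawed pattern as described: deep security analysis covers only
// the first 50 subcommands; the rest fall through to a plain user prompt.
// (Function and constant names are invented for illustration.)
const DEEP_CHECK_LIMIT = 50;

// Returns true if the pipeline reaches the "ask the user" path (i.e. no
// subcommand was flagged), false if deep analysis caught something dangerous.
function analyzePipeline(
  subcommands: string[],
  deepCheck: (cmd: string) => boolean, // true = looks safe
): boolean {
  for (let i = 0; i < subcommands.length; i++) {
    if (i >= DEEP_CHECK_LIMIT) return true;       // past the limit: just prompt
    if (!deepCheck(subcommands[i])) return false; // flagged as dangerous
  }
  return true;
}
```

The attack is then obvious: pad the front of the pipeline with 50 harmless commands and hide the exfiltration behind them, so the dangerous part is never inspected.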
The Axios Supply-Chain Attack
That same day, attackers hit the npm account of axios's top maintainer. The Google Threat Intelligence Group traced it to a North Korean group (tracked as UNC1069 by Google and Sapphire Sleet by Microsoft). The crew shipped malicious versions 1.14.1 and 0.30.4. The poisoned packages dropped a Remote Access Trojan via a hidden package called plain-crypto-js. The attack was live for about 2-3 hours before npm pulled them. Anyone who installed or updated Claude Code via npm between 00:21 and 03:29 UTC may have pulled the bad axios on top of the source map leak.
Malware-Laden Fake Repos
Threat actors stood up GitHub repos that claimed to host the leaked Claude Code source. They shipped infostealer malware. BleepingComputer tracked several scams that fed on dev rush to see the code. The search for “Claude Code source” became a risk on its own.
If you installed Claude Code on March 31, check lockfiles for axios 1.14.1 or 0.30.4. Treat any affected machine as fully compromised: rotate credentials and reinstall the OS clean. Then audit any CLAUDE.md files in cloned repos for prompt injection payloads.
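The lockfile check is a one-liner. The snippet below builds a minimal sample lockfile so it runs self-contained; in a real repo you would point grep at your own package-lock.json (and note that this naive pattern matches any package at those versions, so confirm with npm ls axios):

```shell
# Build a minimal sample lockfile so the demo is self-contained.
cat > /tmp/sample-package-lock.json <<'EOF'
{ "packages": { "node_modules/axios": { "version": "1.14.1" } } }
EOF

# Exit status 0 (a printed match) means a compromised version is present.
grep -E '"version": "(1\.14\.1|0\.30\.4)"' /tmp/sample-package-lock.json
```

On a hit, do not just bump the version: the RAT may already have run at install time, so credential rotation and a clean reinstall are the safe path.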
What This Means for AI Developer Tools
The strategic damage likely exceeds the code damage. Feature flag names alone (KAIROS, the anti-distillation flags, the internal model codenames) are product bets that rivals can now plan around. You can refactor code in a week. You can't un-leak a roadmap.
For Anthropic, the brand hit lands at a sensitive time. The company reportedly pulls in $2.5 billion a year in revenue (80% from business buyers) and is gearing up for an IPO. Those buyers partly pay for the trust that their vendor's tech is closed and locked down. Two leaks in one week dent the safety-first image that sets Anthropic apart. Fortune reported the leak “rattled” IPO plans.
For the wider ecosystem, the leak speeds up a shift already in motion. When orchestration design is no longer secret, the edge moves to model skill and user experience. The leaked permission system, sandbox setup, and multi-agent coordination patterns may turn into de facto standards. They are now the only fully shown production-grade build in the field. Open-source projects can build on tested designs rather than guessing. Several top developers on Hacker News and Reddit argued the CLI should have been open source from the start, since Google’s Gemini CLI and OpenAI’s Codex are already open.
The leak is also a case study in AI tool supply-chain risk. AI coding tools have deep dependency trees. They run with broad file and network access. They get trusted with more credentials and secrets every quarter. A hit on any package becomes an attack on every dev using the tool. Researchers at Zscaler ThreatLabz and SANS Institute published deep dives. They argue the leak shows the need for tighter sandboxing, pinned packages, and repeatable builds in AI dev tools.
Nothing says “agentic future” quite like shipping the source by mistake. Anthropic can refactor the code in a week. The trust deficit from two leaks in five days will take a lot longer to repair, with an IPO on the horizon and business buyers watching closely.
Botmonster Tech