Last updated: March 2026
Key Takeaways
- A misconfigured source map file in Anthropic's npm package exposed the entire Claude Code CLI source code — 1,900 files and 512,000+ lines of unobfuscated TypeScript — on March 31, 2026. This is Anthropic's second data exposure in five days.
- The leaked code reveals remote telemetry, 6+ killswitches, hourly settings polling, and 44 feature flags for unreleased capabilities including an always-on autonomous agent mode called KAIROS. Users of AI coding tools should understand what runs on their machines.
- A separate supply chain attack on the axios npm package occurred in the same time window, potentially injecting a Remote Access Trojan into Claude Code installations made between 00:21 and 03:29 UTC on March 31. If you updated Claude Code during that window, treat your system as compromised.
What Happened
On the morning of March 31, 2026, security researcher Chaofan Shou discovered that version 2.1.88 of the @anthropic-ai/claude-code npm package contained a 59.8 MB source map file — cli.js.map — that should never have been included in a production release.
Source map files are debugging tools that map compressed, minified production code back to the original human-readable source. Shipping one in production is like shipping the architect's complete blueprints inside the walls of a finished building. Normally, these files exist only in internal development environments. In this case, the source map was bundled directly into the public npm package and pointed to a zip archive on Anthropic's own R2 cloud storage bucket, making the full unobfuscated TypeScript codebase directly downloadable by anyone.
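To make the mechanics concrete: a bundler appends a trailing `//# sourceMappingURL=cli.js.map` comment to the minified file, and the map itself is JSON in the standard source map v3 format. A simplified sketch (the file names here are illustrative, not from the leak):

```json
{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/cli.ts", "src/telemetry.ts"],
  "sourcesContent": ["...full original TypeScript of src/cli.ts...", "...and of src/telemetry.ts..."],
  "mappings": "AAAA,OAAO,SAAS..."
}
```

The `sources` list alone reveals the project's layout; when `sourcesContent` is populated, the map embeds every original file verbatim. That is why a single stray `.map` file can expose an entire codebase.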
Within hours, the code was archived to multiple public GitHub repositories, accumulating thousands of stars and forks. Anthropic pushed a patched npm update removing the source map and deleted earlier package versions from the registry, but the code was already permanently mirrored. As of this writing, Anthropic has not issued an official public statement about the npm leak.
This is not an isolated incident. On March 26 — five days earlier — Fortune reported that a misconfigured content management system exposed details about an unreleased model called Claude Mythos, draft blog posts, an upcoming CEO event, and approximately 3,000 unpublished internal assets. Anthropic attributed that incident to "human error in the CMS configuration." Two configuration errors exposing sensitive internal information in the same week establish a pattern, not a coincidence.
How a Single File Exposed Everything
The root cause is straightforward and worth understanding, because it applies to any software distributed through package managers — not just AI tools.
When developers build a JavaScript or TypeScript application for distribution, the build toolchain compresses and bundles the source code into a minified file that is difficult to read. Source map files reverse this process, mapping every line of the minified output back to the original source. They are essential for debugging but are never supposed to ship in production packages.
Claude Code is built using Bun's bundler, which generates source maps by default unless explicitly disabled. The error was almost certainly a missing exclusion rule — either in .npmignore or the package's files field in package.json — that would have prevented the .map file from being included in the published package. One line of configuration would have prevented the entire exposure.
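For readers who publish npm packages themselves, the fix really is one line. A sketch using npm's standard mechanisms (the file list is illustrative):

```json
{
  "name": "@anthropic-ai/claude-code",
  "files": ["cli.js"]
}
```

With a `files` whitelist in package.json, `npm publish` ships only what is listed (plus a few always-included files such as package.json and the README). Equivalently, a `*.map` line in `.npmignore` would have excluded the map. Running `npm pack --dry-run` before publishing prints the exact contents of the tarball, which makes this class of mistake easy to catch in CI.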
The irony is difficult to overstate. Buried in the leaked source code is an entire subsystem called "Undercover Mode," specifically designed to prevent Anthropic's internal information from leaking when Claude Code contributes to public repositories. The system instructs the AI to scrub internal model codenames, project names, and any mention of being an AI from git commits and pull requests. Anthropic built an elaborate system to prevent their AI from accidentally revealing secrets — and then shipped the entire source code in a file they forgot to exclude from the package.
What the Leak Reveals
Most outlets covering this story are focused on cataloging every feature flag and easter egg. That is interesting but not what matters most. From a security and privacy standpoint, here is what users of AI coding tools should pay attention to.
Remote Telemetry and Killswitches
The leaked source reveals that Claude Code polls a remote settings endpoint on Anthropic's servers every hour. This polling mechanism can push configuration changes to running instances, including activating or deactivating features via GrowthBook feature flags — without requiring a user-initiated update or explicit consent.
The code contains six or more remote killswitches that can force specific behaviors: bypassing permission prompts, enabling or disabling fast mode, toggling voice mode, controlling analytics collection, and in some cases forcing the application to exit entirely. If a remote configuration change is classified as "dangerous," a blocking dialog appears — but rejecting it causes the application to quit.
This is not unusual for cloud-connected software. But Claude Code is a tool that requests filesystem access, terminal command execution, and full codebase read/write privileges on your development machine. Understanding that it also maintains a persistent remote control channel to Anthropic's servers is relevant context for anyone running it on sensitive projects.
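The control flow described above can be sketched in a few lines. This is a hypothetical reconstruction, not code from the leak; the names `RemoteFlags`, `applyRemoteFlags`, and `SETTINGS_URL` are invented for illustration:

```typescript
// Hypothetical sketch of the polled-settings behavior described above.
type RemoteFlags = {
  dangerous?: boolean; // change is classified as "dangerous"
  forceExit?: boolean; // remote killswitch: quit the app
};

type Outcome = "apply" | "exit";

// Decide what the client does with a freshly polled flag payload.
// Per the reported behavior: a dangerous change shows a blocking
// dialog, and rejecting it causes the application to quit.
function applyRemoteFlags(flags: RemoteFlags, userAccepted: boolean): Outcome {
  if (flags.forceExit) return "exit";
  if (flags.dangerous && !userAccepted) return "exit";
  return "apply";
}

// The hourly loop would look roughly like:
//   setInterval(async () => {
//     const flags = await (await fetch(SETTINGS_URL)).json();
//     applyRemoteFlags(flags, await promptUserIfDangerous(flags));
//   }, 60 * 60 * 1000);
```

The point of the sketch is the asymmetry: the user's only options for a "dangerous" remote change are acceptance or shutdown.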
Undercover Mode: AI Contributions Hidden in Open Source
The source confirms that Anthropic employees actively use Claude Code to contribute to public open-source repositories. When operating in this mode, the system prompt explicitly instructs Claude to hide all evidence that an AI was involved. The instructions state: commit messages and PR descriptions must not contain internal model codenames, unreleased version numbers, internal project names, the phrase "Claude Code," or any mention that the contributor is an AI.
Internal model codenames revealed in the source include Capybara (a Claude 4.6 variant), Fennec (Opus 4.6), and the unreleased Numbat. Tengu appears hundreds of times as a prefix for feature flags and analytics events and is likely the internal codename for the Claude Code project itself.
44 Feature Flags and the Road Ahead
The most forward-looking discovery is the sheer volume of unreleased functionality already built and waiting behind compile-time feature flags. Among the most significant:
KAIROS is an always-on autonomous agent mode — a persistent background daemon that does not wait for user input. It watches, logs, and proactively acts on observations with a 15-second blocking budget (any action that would interrupt the user for longer gets deferred). KAIROS maintains append-only daily log files and includes a "dreaming" system called autoDream that performs memory consolidation while the user is idle — merging observations, removing contradictions, and converting vague notes into concrete facts. The flag appears over 150 times in the source.
COORDINATOR MODE enables one Claude agent to spawn and manage multiple worker agents in parallel, with each worker operating in its own context. This is multi-agent orchestration built directly into the CLI.
VOICE_MODE is a push-to-talk voice interface that is fully implemented but gated behind a feature flag.
BUDDY is a Tamagotchi-style terminal pet with 18 species (including capybara, axolotl, and ghost), rarity tiers from common to 1% legendary, shiny variants, and five stats: DEBUGGING, PATIENCE, CHAOS, WISDOM, and SNARK. The species is determined by a hash of your user ID, meaning the same user always gets the same buddy. Planned teaser rollout was April 1–7, with a full launch in May. Whether this is genuine or an elaborate April Fools' artifact is debated, though the engineering depth suggests it is real.
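As a side note on mechanics, deterministic assignment from a user ID is a standard technique. A minimal sketch, with an invented hash choice (FNV-1a) and a truncated species list; nothing here is from the leaked implementation:

```typescript
// Illustrative only: maps a stable user ID to a fixed pick from a list.
const SPECIES = ["capybara", "axolotl", "ghost"]; // 3 of the reported 18

// FNV-1a, a simple stable 32-bit string hash (an arbitrary choice here).
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // multiply by FNV prime, keep unsigned
  }
  return h;
}

function pickSpecies(userId: string): string {
  return SPECIES[fnv1a(userId) % SPECIES.length];
}
```

Because the result is a pure function of the ID, every machine computes the same species with no stored randomness and no server round-trip.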
The Concurrent Supply Chain Attack
Separate from the source code exposure but critically relevant: VentureBeat reported that a supply chain attack on the widely-used axios npm package occurred in the same time window as the leak. Anyone who installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC may have pulled in a malicious version of axios (1.14.1 or 0.30.4) containing a Remote Access Trojan.
If you updated Claude Code during that window, you should immediately check your project lock files (package-lock.json, yarn.lock, or bun.lockb) for these specific axios versions or a dependency called plain-crypto-js. If found, treat the machine as fully compromised: rotate all credentials, API keys, and SSH keys, and perform a clean system reinstall.
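That check can be scripted. A hypothetical helper (the function name is invented; the lock-file shape follows npm's package-lock v2/v3 format, where installed packages are keyed by their `node_modules` path):

```typescript
// Illustrative checker for the compromised axios releases and the
// malicious plain-crypto-js dependency described above.
const BAD_AXIOS = new Set(["1.14.1", "0.30.4"]);

function findCompromised(lock: { packages?: Record<string, { version?: string }> }): string[] {
  const hits: string[] = [];
  for (const [path, info] of Object.entries(lock.packages ?? {})) {
    // Matches both top-level and nested installs of axios.
    if (path.endsWith("node_modules/axios") && info.version && BAD_AXIOS.has(info.version)) {
      hits.push(`axios@${info.version}`);
    }
    if (path.endsWith("node_modules/plain-crypto-js")) {
      hits.push(`plain-crypto-js@${info.version ?? "unknown"}`);
    }
  }
  return hits;
}

// Usage sketch:
//   const lock = JSON.parse(require("node:fs").readFileSync("package-lock.json", "utf8"));
//   if (findCompromised(lock).length > 0) { /* treat the machine as compromised */ }
```

yarn.lock and bun.lockb use different formats; for yarn projects, `yarn why axios` prints the installed version and why it is present.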
Anthropic has designated its native installer as the recommended installation method going forward, specifically because it bypasses the npm dependency chain entirely. Users still on npm should uninstall version 2.1.88 and revert to 2.1.86.
This is a concrete example of why supply chain security matters for any AI tool you install. The ClawHub marketplace compromise earlier this year — where hundreds of malicious skills were found stealing credentials — demonstrated the same class of risk in a different ecosystem. Any tool that installs packages from public registries inherits the security posture of that entire registry.
What This Means for Users of AI Coding Tools
The source code leak itself does not compromise Claude's core AI capabilities. No model weights, training data, or inference infrastructure were exposed. The Claude models that power Claude Code remain on Anthropic's servers. As multiple commentators noted, knowing the source code of the slot machine does not help you build the casino — the real intellectual property is the model.
However, the leak does raise legitimate questions about operational security at a company whose products request deep access to your development environment. If you use Claude Code — or any AI coding agent that operates on your local machine — here are practical steps to limit your exposure:
Run AI tools in isolation. Use a dedicated development machine or virtual machine for AI-assisted coding on sensitive projects. Do not give AI coding tools access to production credentials, SSH keys, or secrets that live in your home directory.
Monitor outbound network traffic. AI coding tools make network calls — to API endpoints, telemetry servers, and update mechanisms. A Pi-hole instance on your network gives you full visibility into every DNS query these tools generate. If an AI tool starts phoning home to unexpected domains, you will see it in the query log.
Consider local models for sensitive work. Tools like Ollama, LM Studio, and llama.cpp let you run capable AI models entirely on your own hardware with zero cloud dependency and zero telemetry. The tradeoff is performance — local models are not yet at parity with Claude or GPT-5 on the hardest tasks — but for routine coding, summarization, and document work, they handle 70–80% of daily needs. Our guide to the best hardware for running local AI covers GPU, mini PC, and Apple Silicon options across every budget tier.
Control your network boundary. The same principle behind owning your own modem instead of renting from your ISP applies here: when you control the hardware and the network, you control what leaves your perimeter. Open-source router firmware with DNS filtering, VPN routing, and VLAN segmentation gives you layers of defense that no AI tool's privacy policy can match.
The Bigger Picture
Two data exposures in five days — one from a misconfigured CMS, one from a misconfigured npm package — suggest Anthropic's operational security practices are not keeping pace with the company's growth. For a company reporting an estimated $19 billion annualized revenue run rate and whose coding tool alone generates an estimated $2.5 billion in annual recurring revenue, these are not junior-engineer mistakes. They are systemic configuration management failures.
The most consequential long-term takeaway from the leak may not be any specific feature or secret. It is the confirmation that AI coding tools are becoming dramatically more autonomous. KAIROS — a persistent background agent that proactively acts, manages its own memory, and consolidates observations while you sleep — is not a research prototype. It is compiled, feature-gated code sitting in a production codebase, waiting to be turned on. Multi-agent coordination, voice interfaces, and workflow scripting are all in the same state: built, tested, and gated behind flags.
The question for every user of AI-powered development tools is not whether these capabilities are coming. It is whether you trust the companies deploying them to manage the security of their own infrastructure — because you are granting these tools access to yours.
Frequently Asked Questions
What happened with the Claude Code source code leak?
On March 31, 2026, security researcher Chaofan Shou discovered that Anthropic accidentally published a source map file inside the Claude Code npm package (version 2.1.88). This file contained the complete, unobfuscated TypeScript source code for Claude Code — 1,900 files and 512,000+ lines — which was directly downloadable from Anthropic's cloud storage. Anthropic removed the file and deleted older package versions, but the code had already been mirrored to multiple public GitHub repositories.
Is my data at risk from the Claude Code leak?
The leak exposed Claude Code's client-side CLI source code, not Anthropic's AI models, training data, customer data, or server infrastructure. Your conversations with Claude and your API keys were not exposed by this leak. However, if you installed or updated Claude Code via npm between 00:21 and 03:29 UTC on March 31, 2026, a separate axios supply chain attack may have introduced malicious code to your system. Check your lock files for axios versions 1.14.1 or 0.30.4 immediately.
What is Claude Code's Undercover Mode?
Undercover Mode is a system within Claude Code that instructs the AI to hide all evidence of being an AI when contributing to public open-source repositories on behalf of Anthropic employees. It scrubs internal model codenames, project names, and AI attribution from git commits and pull request descriptions. The system's instructions explicitly state: "Do not blow your cover."
What unreleased features were found in the Claude Code leak?
The source code contains 44 compile-time feature flags for unreleased capabilities. The most significant include KAIROS (a persistent, always-on autonomous agent mode with background memory consolidation), COORDINATOR MODE (multi-agent orchestration), VOICE_MODE (push-to-talk voice interface), ULTRAPLAN (30-minute remote planning sessions), and BUDDY (a Tamagotchi-style terminal pet with 18 species and rarity tiers).
Is it safe to keep using Claude Code after the leak?
The source code exposure does not affect Claude Code's functionality or the security of the Claude AI model itself. Anthropic has patched the npm package. However, users should ensure they are not running version 2.1.88, should check for the malicious axios versions mentioned above, and should consider using Anthropic's native installer instead of npm to avoid future supply chain risks. For sensitive projects, consider running AI models locally instead of relying on cloud-connected tools.
What is KAIROS in Claude Code?
KAIROS is an unreleased feature flag that appears over 150 times in the leaked Claude Code source. It represents a persistent, always-on autonomous agent mode — a daemon that runs in the background, monitors your project, maintains daily observation logs, and proactively takes actions it thinks are helpful. It includes a "dreaming" system that consolidates memory while the user is idle. KAIROS is fully built but gated behind a compile-time flag and is not present in external releases.
How can I run AI coding tools more securely?
Use a dedicated machine or VM for AI-assisted development on sensitive projects. Monitor network traffic with a DNS-level tool like Pi-hole to see what your AI tools are connecting to. For work that must stay completely private, run open-weight models locally using Ollama or LM Studio — your data never leaves your machine, there is no telemetry, and there is no remote killswitch.

