
What the Claude Code Source Leak Teaches Every Team Shipping AI Tools About Build Pipeline Security

A 59.8 MB source map file exposed 512,000 lines of proprietary code. The fix is a five-point CI/CD checklist any team can implement this week.

On March 31, 2026, Anthropic published version 2.1.88 of the @anthropic-ai/claude-code package on npm, the public registry where JavaScript software is distributed. Bundled inside was a 59.8 MB JavaScript source map file - a debugging artifact that maps compiled code back to its original, readable source. That file contained the complete, unminified TypeScript source of Claude Code: 512,000 lines across 1,906 files. Within hours, the codebase had been downloaded from Anthropic's own Cloudflare R2 storage bucket, mirrored to GitHub, and forked tens of thousands of times.

The cause was not a sophisticated attack. It was a build configuration oversight: Bun, the JavaScript runtime Anthropic uses as its bundler (the tool that compiles and packages code for distribution), generates source maps by default. The release pipeline did not disable that default, and the package's file configuration did not exclude .map files. A nearly identical leak had occurred with an earlier Claude Code version in February 2025, making this the second such incident in 13 months.

This is worth studying not because it is unusual, but because it is ordinary. The same class of build pipeline oversight can happen to any team shipping compiled JavaScript or TypeScript - and for organizations building AI tools, the stakes of accidental source exposure are higher than they have ever been.

What Was Actually Exposed - and Why It Matters for Competitive Strategy

The source map's sourcesContent array contained the original TypeScript files verbatim. Researchers who analyzed the code published details of Claude Code's internal architecture, including its self-healing memory system, multi-agent orchestration layer, tool execution framework, and query engine for LLM API calls.

More consequentially for Anthropic's competitive position, the leak exposed 44 unreleased feature flags. The most discussed was KAIROS - referenced over 150 times in the source - which implements an autonomous daemon mode where Claude Code operates as an always-on background agent performing "memory consolidation" while the user is idle. Another flag, ULTRAPLAN, offloads complex planning tasks to a remote container runtime running Opus with up to 30 minutes of compute time.

The code also revealed internal benchmark data, including a 29-30% false claims rate in the current version - a regression from 16.7% in an earlier version - along with an "assertiveness counterweight" designed to prevent overly aggressive refactoring. For competitors, these metrics provide a precise benchmark of the current ceiling for agentic coding performance and the specific weaknesses Anthropic is actively working to solve.

For business leaders, the lesson is not about Anthropic's specific metrics. It is about what a single build artifact can reveal: product roadmap, competitive benchmarks, security architecture, and unreleased capabilities. Every organization shipping compiled code is one misconfigured bundler setting away from a similar exposure.

The Build Pipeline Gap: Why This Class of Mistake Keeps Recurring

Source map inclusion is one of the most common build artifact oversights in the JavaScript ecosystem, and the pattern behind it is straightforward. Modern bundlers optimize for developer experience, which means debug-friendly defaults. Bun generates source maps by default. Webpack's default devtool setting includes source maps. Unless the production build configuration explicitly disables them, they ship.

The second factor is that traditional security tooling does not flag this class of exposure. Vulnerability scanners check for known CVEs in dependencies. Static analysis tools check for code quality issues. Neither checks whether the published package contains artifacts that were never intended for distribution. The 59.8 MB source map file was an order of magnitude larger than the actual bundle - a signal that would be obvious to a human reviewer but invisible to automated security scanning.

Teams that handle this well treat the build-to-publish pipeline as a security boundary, not just a convenience layer. In practice, this means three things: the bundler configuration for production is explicitly locked down (source maps off, debug symbols stripped), the package manifest uses an allowlist rather than a denylist for included files, and the CI/CD pipeline includes a validation step that fails the build if unexpected artifacts appear in the publish payload.

[Figure: Build Pipeline Security Boundary diagram - five checkpoints that prevent build artifact leaks, any one of which would have stopped the Claude Code exposure: (1) bundler config with source maps explicitly disabled in production (Bun: --sourcemap=none, Webpack: devtool: false, esbuild: omit --sourcemap); (2) package.json "files" allowlist, safer than a .npmignore denylist; (3) CI/CD artifact scan via npm pack --dry-run that fails the build if .map files are detected; (4) size anomaly detection (59.8 MB source map vs ~5 MB bundle); (5) pre-publish review by a second engineer.]
FIGURE 1 - Five build pipeline checkpoints. Any single checkpoint would have prevented the Claude Code source map exposure.

The Same Day, a Different Threat: Why Build Security Requires Both Hygiene and Verification

On the same day as the Claude Code leak, a separate and far more dangerous event occurred in the npm ecosystem. The axios package - one of the most widely used HTTP clients in JavaScript with 83 million weekly downloads - was compromised in an intentional supply chain attack. Microsoft Threat Intelligence attributed the attack to Sapphire Sleet, a North Korean state actor.

The attacker published malicious versions of axios (1.14.1 and 0.30.4) that included a dependency containing a cross-platform remote access trojan. The malicious versions were live for roughly three hours before being identified and removed. Unlike the Claude Code leak, this was not an accident - it was a deliberate attack using credentials compromised from a maintainer account.

The coincidence of these two events on a single day illustrates the dual nature of build pipeline risk. Organizations face both accidental exposures from misconfigured build tooling and deliberate attacks from sophisticated adversaries targeting the same infrastructure. The controls overlap significantly: dependency verification, artifact validation, and publish-time review processes address both vectors.

For readers who followed our coverage of the LiteLLM supply chain attack just days earlier, the pattern is unmistakable. The open-source package ecosystem experienced three major security events in a single week: the LiteLLM PyPI compromise on March 24, the Claude Code source map leak on March 31, and the axios supply chain attack the same day. Each exploited a different weakness - compromised publishing credentials, a build configuration oversight, and a maintainer account compromise, respectively - but all three could have been mitigated by the same class of release engineering controls.

The Regulatory Dimension: When Build Oversights Become Governance Questions

Within 48 hours of the leak, Rep. Josh Gottheimer (D-N.J.) sent a formal letter to Anthropic CEO Dario Amodei raising national security concerns. Gottheimer's letter focused on three issues: the repeated nature of the exposure (this was the second leak in 13 months), Anthropic's recent decision to narrow its internal safety policy pledge, and the risk that adversarial state actors could use the exposed code to identify vulnerabilities or replicate capabilities.

Anthropic's chief commercial officer Paul Smith characterized the leak as the result of "process errors" related to the company's fast product release cycle, stating it was "absolutely not breaches or hacks." The company filed copyright takedown requests to remove the code from GitHub, though it later acknowledged the takedowns had impacted more repositories than intended and scaled them back.

The regulatory attention is significant because it signals a shift in how build pipeline incidents are perceived. A source map leak might historically have been treated as an embarrassing but low-consequence mistake. When the leaked code contains AI system internals - permission models, security validators, benchmark data, and unreleased capabilities - the exposure intersects with competitive intelligence, national security, and AI governance conversations that are already under intense political scrutiny.

For organizations shipping AI tools, this means build pipeline security is no longer purely an engineering concern. It is a governance concern with potential regulatory implications. The teams that position themselves well here are those that can demonstrate systematic controls - not just good intentions - around their release engineering process.

[Figure: Accidental vs. Intentional Build Pipeline Threats - two threat vectors, one set of controls, both materializing on March 31, 2026. Accidental exposure: the Claude Code source map leak (build misconfiguration shipped debug artifacts, 512K lines exposed via a 59.8 MB map, second incident in 13 months). Intentional attack: the axios supply chain compromise (compromised account published malicious code, cross-platform RAT delivered to users, 83M weekly downloads at risk, attributed to a North Korean state actor). Shared controls: artifact validation, dependency verification, publish-time review, and size monitoring provide defense in depth against both vectors.]
FIGURE 2 - March 31 saw both an accidental leak and an intentional attack. The same release engineering controls mitigate both.

A Practical Framework: The Five-Point Release Engineering Checklist

The Claude Code leak was preventable at multiple points. Any one of the following controls, implemented correctly, would have stopped the exposure. Implementing all five creates defense in depth that accounts for human error at any single layer.

1. Lock down bundler defaults for production. Every bundler has settings that are appropriate for development but dangerous in production. Source maps are the most common example, but the principle extends to debug logging, development-only dependencies, and verbose error messages. The production build configuration should be a separate, explicitly maintained artifact - not the development configuration with a few flags toggled. For Bun specifically, this means setting --sourcemap=none in the production build command. For Webpack, devtool: false. For esbuild, omitting the --sourcemap flag entirely.
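As a concrete illustration, the production build script can pin these flags explicitly rather than inherit defaults. The entry point, output directory, and script names below are placeholders, not Anthropic's actual configuration:

```json
{
  "scripts": {
    "build": "bun build ./src/cli.ts --outdir dist --sourcemap=linked",
    "build:prod": "bun build ./src/cli.ts --outdir dist --minify --sourcemap=none"
  }
}
```

Keeping a separate, explicit production script means a toolchain upgrade that changes a default cannot silently change what ships.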

2. Use an allowlist, not a denylist, for package contents. The files field in package.json specifies exactly which files should be included in the published package. This is fundamentally safer than .npmignore, which specifies what to exclude. With a denylist, a new file type (like .map) can slip through if nobody remembers to add it to the exclusion list. With an allowlist, only explicitly listed files are included - everything else is excluded by default.
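A minimal sketch of the allowlist approach (package name and paths are placeholders) - anything not listed, including .map files, is simply never packed:

```json
{
  "name": "@example/cli",
  "version": "1.0.0",
  "files": [
    "dist/cli.js",
    "README.md",
    "LICENSE"
  ]
}
```

With this manifest, a new artifact type introduced by a bundler change stays out of the publish payload by default instead of requiring someone to remember a new ignore rule.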

3. Add an automated artifact scan to CI/CD (your continuous integration and deployment pipeline). Run npm pack --dry-run in the CI pipeline and programmatically check the output for files that should never appear in a published package: .map files, .env files, test fixtures, internal documentation, and any file above a configurable size threshold. Fail the build if any are detected. This is the automated equivalent of a pre-flight checklist - it catches the mistakes that slip past human attention.

4. Monitor artifact size for anomalies. The Claude Code source map was 59.8 MB - roughly 12 times the size of the actual bundle. A simple CI check that compares the current package size against the previous release and alerts on significant deviations would have flagged this immediately. This does not require sophisticated tooling - a shell script that compares file sizes and fails on a percentage threshold is sufficient.
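The comparison logic fits in a few lines. The 50% threshold below is an assumption for illustration; teams should tune it to their release cadence:

```javascript
// Sketch: flag a release whose packed size deviates from the previous
// release by more than a set fraction. Threshold is an assumption.
function sizeAnomaly(currentBytes, previousBytes, maxDeviation = 0.5) {
  const ratio = currentBytes / previousBytes;
  return ratio > 1 + maxDeviation || ratio < 1 - maxDeviation;
}

// In CI: compare the size reported by `npm pack --dry-run --json`
// against a stored baseline from the last published release, and
// fail or alert when sizeAnomaly(...) returns true.
```

A 59.8 MB map added to a roughly 5 MB bundle is a ~12x jump; any sane threshold trips on it.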

5. Require a second engineer to review the publish payload. For high-stakes packages - anything with significant user counts, anything that handles credentials, anything that contains proprietary IP - the publish step should require a second person to review what is actually being published. Not the code diff, but the literal contents of the package that will be uploaded to the registry. This human gate catches the category of mistakes that automated checks have not been configured to detect.

What This Means for Organizations Shipping AI Tools

AI tools carry a uniquely concentrated payload of sensitive intellectual property. A traditional web application's source code reveals business logic and architecture. An AI tool's source code reveals that plus model interaction patterns, prompt engineering strategies, benchmark data, safety mechanisms, and competitive positioning. The Claude Code leak exposed all of these in a single file.

This concentration of value means that AI-focused organizations need to treat their build pipeline with the same rigor they apply to their model weights and training data. In practice, the organizations that have publicly described handling this well share three characteristics:

They separate build security from application security. Most security programs focus on the code that runs in production - vulnerability scanning, penetration testing, access controls. Build pipeline security - what artifacts get produced, what gets published, what safeguards exist between "code compiles" and "code ships" - is often treated as a DevOps convenience rather than a security boundary. The Claude Code incident demonstrates that this gap has material consequences.

They audit their bundler defaults on every toolchain change. Anthropic's adoption of Bun introduced a new default behavior (source maps enabled) that their existing release process did not account for. Every time a team changes bundlers, runtimes, or build tools, the production configuration should be re-audited for debug-friendly defaults that are inappropriate for distribution.

They treat artifact size as a security signal. A 59.8 MB source map file in a package whose bundle is roughly 5 MB is a 12x size anomaly. Simple threshold-based monitoring in the CI/CD pipeline would have caught this immediately. This is not a sophisticated detection technique - it is the build pipeline equivalent of noticing that the file is suspiciously large before putting it in the envelope.

Five Questions for Your Next Security Review

The following questions translate the lessons from this incident into a practical assessment framework. Each maps to a specific control that would have prevented the Claude Code exposure.

1. Does our production build configuration explicitly disable source maps and debug artifacts? If the answer is "I think so" or "it should," verify it. Run the production build and inspect the output directory for .map files, verbose error handlers, and development-only dependencies. The Claude Code leak happened because nobody verified that the production configuration had been updated after a toolchain change.

2. Does our package manifest use an allowlist or a denylist? If you are using .npmignore (or its equivalent in other ecosystems), consider switching to the files field in package.json. An allowlist ensures that only explicitly listed artifacts are published. A denylist requires you to anticipate every file type that should be excluded - and a single omission creates an exposure.

3. Does our CI/CD pipeline validate the publish payload before release? If there is no automated step that inspects the actual contents of the package being published - not the source code, but the final artifact - add one. npm pack --dry-run shows exactly what will be uploaded. A script that checks this output for prohibited file patterns takes less than an hour to implement and would have prevented both Claude Code incidents.

4. Would we detect a 10x size anomaly in a published artifact? If your CI/CD pipeline does not track artifact sizes between releases and alert on significant deviations, add a comparison step. The implementation is straightforward: store the previous release's artifact sizes and fail or alert if the current build deviates by more than a configurable percentage.

5. Does a second person review the publish payload for high-value packages? For packages that contain proprietary IP, handle credentials, or serve large user bases, the publish step should require approval from someone who did not perform the build. This human checkpoint catches the category of issues that automated checks have not been configured to detect yet.

The Claude Code source map leak was not a sophisticated attack or an exotic failure mode. It was a build configuration oversight - the kind that any team shipping compiled code could replicate. The controls that prevent it are well-understood, implementable in a single sprint, and effective against both accidental exposures and intentional supply chain attacks. The question is not whether your organization knows about these controls. It is whether they are implemented, tested, and enforced in your release pipeline today.

Frequently Asked Questions

What exactly was leaked in the Claude Code incident?

On March 31, 2026, Anthropic published version 2.1.88 of the @anthropic-ai/claude-code npm package with a 59.8 MB source map file (cli.js.map) that contained the full, unminified TypeScript source code - approximately 512,000 lines across 1,906 files. The source map included the sourcesContent array, which means the original source files were embedded verbatim. This exposed 44 unreleased feature flags, internal benchmark data, the complete permission model, every bash security validator, and references to unannounced models. No customer data or credentials were exposed.

How did this happen if Anthropic has sophisticated security teams?

The root cause was a build configuration oversight, not a security breach. Anthropic uses Bun as their JavaScript bundler, and Bun generates source maps by default. The release team did not disable source map generation for the production build, and the .npmignore file and package.json files field did not exclude .map files. This is a common pattern with newer build tools whose defaults prioritize developer experience over production security. The same type of oversight had occurred with an earlier Claude Code version in February 2025, making this the second incident in 13 months.

Could this happen to our organization?

Yes, and that is precisely why this incident is instructive. Source map inclusion is one of the most common build artifact oversights in JavaScript and TypeScript projects. If your team publishes npm packages, ships browser bundles, or deploys compiled applications, your build pipeline could be including debug artifacts without anyone noticing. The five-point checklist in this article - covering bundler configuration, allowlist-based packaging, CI/CD artifact scanning, artifact size monitoring, and pre-publish review - addresses this risk systematically.

What is the business risk of a source code leak like this?

The direct risks include competitive intelligence exposure (competitors can study your architecture, benchmark data, and product roadmap), security vulnerability discovery (attackers can analyze your security validators and permission models for weaknesses), intellectual property loss (the code itself represents significant R&D investment), and regulatory scrutiny (Rep. Gottheimer sent a formal letter to Anthropic's CEO raising national security concerns). For Anthropic specifically, the leak revealed internal benchmark data showing areas where the product was still being improved, which competitors can use for positioning.

What is a source map and why is it dangerous?

A source map is a file that maps compiled, minified, or bundled code back to its original source code. Developers use them during debugging so they can read error messages in terms of their original code rather than the transformed output. The danger is that the sourcesContent field in a source map contains the actual original source files verbatim. When a source map is accidentally included in a published package, it is equivalent to publishing the original source code. Some bundlers, including Bun, generate source maps by default, which means teams need to explicitly opt out for production builds.
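A skeletal example makes the danger concrete. This is not the leaked file, just the general shape of a version 3 source map with placeholder content - note that sourcesContent carries the original code itself, not a reference to it:

```json
{
  "version": 3,
  "file": "cli.js",
  "sources": ["../src/index.ts"],
  "sourcesContent": ["export function main(): void {\n  // original TypeScript, embedded verbatim\n}\n"],
  "names": [],
  "mappings": ";;AAAA"
}
```

The mappings field is what debuggers use; the sourcesContent field is what leaks.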

How does this relate to the axios supply chain attack that happened the same day?

On the same day as the Claude Code leak, the axios npm package (83 million weekly downloads) was compromised in a separate, intentional supply chain attack attributed to North Korean state actor Sapphire Sleet. The attacker published malicious versions containing a remote access trojan. The coincidence underscores a broader point: the npm ecosystem faces both accidental exposures from build misconfigurations and deliberate attacks from sophisticated adversaries. Organizations need controls that address both vectors - build artifact hygiene for accidental leaks and dependency verification for intentional compromises.

Code Atelier · NYC
