Welcome to a special Fieldnotes. I know we are The Privacy Design Lab, and what I’m about to discuss isn’t a privacy issue per se. It’s a security issue. But if you know anything about my feelings about privacy, you know that security ties extremely closely to privacy, and I truly think this topic is an important one for our audience. That said, this is not our usual topical post, but rather a reaction to the recent Claude Code (Anthropic’s closed-source coding agent) source leak. We haven’t included a micro tool in this exploration of what happened, but keep reading for some tips on what to do if your enterprise uses Claude Code.

On March 31, 2026, Anthropic had the other kind of security event, the far more common type. It shipped the wrong artifact during a public release of Claude Code. The release included a JavaScript source map (.map) file that could be used to reconstruct a huge chunk of the underlying TypeScript codebase. Within hours, mirrors and forks proliferated, and analysis threads (and hot takes) sprinted ahead of official communications from Anthropic. Before Anthropic could even get its collective head around the situation, third parties were off and running.

This type of packaging leak implies process failure, rather than perimeter failure. The weak points were build pipelines, release gates, and verification of what a release actually publishes. It’s an old-school software lesson landing in a very new-school context. It might be old school, but it’s not niche, and it’s not a minor issue. In Anthropic’s own February 2026 funding announcement, the company said Claude Code’s run-rate revenue exceeded $2.5B, and that enterprise use represented over half of that revenue. Those numbers make this incident feel less like developer gossip and more like a board-level event.

1) What happened, based on the cleanest timeline we have so far

What happened?

A Claude Code release to the public npm registry briefly included a large source map file (reported as cli.js.map, ~60MB) that exposed enough embedded source content to reconstruct a very large portion of the application’s internal codebase, described as roughly 500K+ lines of code across about 1,900 files. Whoa, nelly!

How was it found?

Reporting indicates the issue was first spotted publicly by a single individual, after which others rapidly replicated the extraction and began mirroring the public release, including the map file and the source files pulled from it.

How did Anthropic react?

Multiple outlets have quoted Anthropic as describing the event as a release packaging issue caused by human error, and Anthropic stated that no sensitive customer data or credentials were exposed. That’s all well and good, and I’m truly glad that no sensitive customer data was included, but it’s just the beginning of their problems from a security standpoint.

What happened next?

By the time the package was patched, the code had already been copied and redistributed, including on GitHub. Reporting also describes widespread forking and mirroring, along with efforts to limit distribution through takedown requests.

Two small but important clarifiers:

  • The leak was described as relating to Claude Code’s internal application/source, not the underlying frontier model weights. In other words, this is not “Claude’s brain,” but it is a detailed look at one of Anthropic’s most important product wrappers and workflows. 

  • The thing that made this incident important isn’t just that code was released, but how readable that code was to third parties because of the inclusion of the source map.

2) The security implications, and why enterprise customers should care

Source maps are a debugging convenience but also an information exposure surface

A source map is (roughly) a JSON mapping between transformed/minified JavaScript and the original source, used to make debugging less painful. In the best case, it helps your engineers trace a stack trace back to real file names and real lines of code. In the worst case, it helps everyone else do that exact same exercise. 

OWASP’s Web Security Testing Guide calls out source maps explicitly as a form of information leakage precisely because they can make code far more human-readable and can make it easier for attackers to find vulnerabilities or sensitive information in the client-side code. 
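Mechanically, the exposure is almost trivial to exploit. A source map v3 file is plain JSON whose `sources` and `sourcesContent` fields can carry the original file paths and the full original text. The sketch below is a generic illustration (not code from the leak; the file names and directory layout are hypothetical) of how little work it takes to turn a shipped `.map` file back into readable source:

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> list[str]:
    """Recover original source files embedded in a JS source map.

    Source map v3 files carry a `sources` list (original file paths)
    and an optional `sourcesContent` list (the original text itself).
    When `sourcesContent` is populated, the original code can be
    written straight back to disk -- no reverse engineering required.
    """
    data = json.loads(Path(map_path).read_text())
    recovered = []
    for path, content in zip(data.get("sources", []),
                             data.get("sourcesContent") or []):
        if content is None:
            continue  # this entry only names the file; no embedded text
        # Directory structure flattened for brevity; a real tool would
        # sanitize and preserve the relative paths instead.
        target = Path(out_dir) / Path(path.replace("../", "")).name
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)
        recovered.append(str(target))
    return recovered
```

This is roughly what the early mirrors would have done at scale: one JSON parse, one loop, and the “minified and unreadable” protection evaporates.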

The lesson… you don’t have to leak customer data to create customer risk.

Risk #1: Accelerated vulnerability discovery for Anthropic, and for customers who depend on the tool

When internal code becomes public, the cost of security research drops dramatically. Even if the leak contains no API keys or passwords, it may still expose:

  • architectural patterns (i.e., what talks to what)

  • assumptions (e.g., what is “trusted,” what is “sanitized,” what is “validated”)

  • feature flags and unshipped pathways

  • error-handling logic and edge cases

  • telemetry and logging boundaries that can be a goldmine for understanding where sensitive data might surface

This doesn’t automatically mean that attackers can exploit Anthropic’s code, but it compresses the timeline significantly. Researchers don’t have to reverse-engineer behavior from black-box observation alone; they can read it. For enterprise customers, this can translate into practical questions like:

  • Is there a known vulnerability in the current Claude Code release that becomes easier to exploit now?

  • Are there insecure defaults or debug behaviors we need to disable?

  • Do we need compensating controls on systems Claude Code can access?

These are the same very real questions customers ask after any security-relevant disclosure, especially for tools that touch code, CI/CD, repositories, credentials, and internal systems.

Risk #2: Supply-chain and impersonation dynamics

Once there’s a high-profile leak, you get a predictable second wave of lookalike repos, unofficial forks, typosquats, and “fixed” builds. Some are benign. Others aren’t.

This is not unique to Anthropic. It’s a common pattern, but it’s especially sharp for developer tools distributed through registries and package managers because the distance between a download and execution is short. Even without asserting any specific malicious campaign, enterprise security teams should assume:

  • employees will stumble into unofficial mirrors

  • so-called helpful third parties will publish modified builds

  • attackers will use the moment to seed impersonation artifacts

This is why vendor incidents often create downstream customer work even when no specific customer data is leaked.

Risk #3: Release discipline becomes part of the security story

Anthropic framed the event as human error in release packaging. Perhaps that’s less alarming than a malicious actor, but running afoul of implementation best practices isn’t a good look either. Secure software development guidance, such as NIST’s Secure Software Development Framework (SSDF), emphasizes implementing practices that reduce vulnerabilities and prevent recurrences. A packaging error that exposes internal code is, in SSDF language, a sign you need stronger controls around build/release processes and verification steps.

This is an unfortunate bungle for Anthropic’s branding as a safety-first AI company. Shipping a debug artifact to a public registry undercuts that narrative with a mundane operational reality.
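For flavor, a release gate of the kind SSDF-style guidance points toward can be very small. This sketch (the denylist of file suffixes is my assumption, not any vendor’s actual pipeline) fails a publish step when debug or secret artifacts show up in the staged package:

```python
from pathlib import Path

# Suffixes that should never ship in a public release artifact.
# The exact denylist is an assumption; tune it to your own build.
FORBIDDEN_SUFFIXES = {".map", ".env", ".pem"}

def audit_release_dir(staging_dir: str) -> list[str]:
    """Return paths of debug/secret artifacts found in a staged release.

    Intended as a CI release gate: if the returned list is non-empty,
    fail the publish step instead of letting the artifact reach a
    public registry.
    """
    offenders = []
    for f in Path(staging_dir).rglob("*"):
        if f.is_file() and f.suffix in FORBIDDEN_SUFFIXES:
            offenders.append(str(f))
    return sorted(offenders)
```

Ten lines of CI policy like this, run against the exact directory that gets published, is the boring control that would have turned this incident into a failed build instead of a news cycle.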

What enterprise customers should do

If your org uses Claude Code or evaluated it recently, a reasonable response is to treat this as vendor incident hygiene:

  1. Confirm your versions and apply vendor guidance/patches.

  2. Rotate credentials that the tool can access (especially tokens for Git providers, CI, artifact registries), if your internal risk posture warrants it. Do this even if the vendor says no credentials were exposed.

  3. Block unofficial builds in your environment via allowlisting, internal package proxies, dev tooling policies.

  4. Ask for an incident write-up, including root cause, impacted versions, mitigations, and what process changes prevent recurrence.

  5. Document your risk decision (particularly if you’re a regulated enterprise) because auditors care less about whether you were perfect and more about whether you were competent and consistent.
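For step 3 in particular, one lightweight verification is to compare the hash of any locally cached tarball against what the official registry reports. npm records a `dist.integrity` value in Subresource Integrity format: the string "sha512-" followed by the base64-encoded SHA-512 digest of the published tarball. A sketch of computing that value locally:

```python
import base64
import hashlib
from pathlib import Path

def npm_integrity(tarball_path: str) -> str:
    """Compute an npm-style Subresource Integrity string for a tarball.

    npm's registry metadata records `dist.integrity` as "sha512-" plus
    the base64-encoded SHA-512 digest of the published tarball.
    Matching this value for a locally cached artifact against the one
    the official registry reports is a quick check that you are running
    the vendor's build, not a mirrored or modified one.
    """
    digest = hashlib.sha512(Path(tarball_path).read_bytes()).digest()
    return "sha512-" + base64.b64encode(digest).decode("ascii")
```

You can pull the registry’s value with `npm view <package>@<version> dist.integrity` (the package name and version here are placeholders) and compare the two strings; a mismatch means the artifact is not the one the vendor published.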

None of these steps assume Anthropic is untrustworthy. They’re the standard playbook when a critical vendor has a security-relevant mishap.

3) The intellectual property story: copyright, trade secrets, and the AI authorship complication

Security incidents end. IP issues linger. A source leak puts pressure on two legal/strategic buckets:

Bucket A: Copyright and DMCA takedown mechanics

Takedown mechanics on platforms like GitHub often run through the Digital Millennium Copyright Act (DMCA). 17 U.S.C. § 512 is the foundation of the notice-and-takedown process. It creates a safe harbor for online service providers that remove allegedly infringing material after proper notice. This is why you’ll see companies submit takedown notices quickly. It’s one of the fastest levers available to limit distribution, even if it’s imperfect in practice. I’ve even managed to use it for clients whose social media accounts were hacked and continued to display my clients’ intellectual property. With those kinds of claims, you can get content pulled within 24 hours.

But… DMCA takedowns are fundamentally about copyright. Which leads to the second bucket.

Bucket B: Trade secret protection, which really matters for a closed tool

A lot of “secret sauce” in software isn’t patented. A lot of it can’t be patented, and even though it might be copyrightable, most companies don’t want to deposit their entire codebase with the U.S. Copyright Office to register it. So that leaves trade secrets as the best method of intellectual property protection. Under widely used definitions (e.g., UTSA-style definitions summarized by Cornell), a trade secret is information that derives independent economic value from not being generally known and is subject to reasonable efforts to maintain secrecy.

So Anthropic’s key question isn’t only “Can we get this removed with a DMCA takedown request?” It’s also “Did we take reasonable measures to keep this secret, and are we acting quickly enough now to preserve claimed trade secret status?”

That tends to hinge on facts decided by a court, including access controls, employee policies, repo security, release gates, incident response, and the speed and seriousness of remediation.

Where the AI authorship question gets mushy

The biggest legal gray zone Anthropic is probably stepping into next is this: what if Claude Code was itself coded using AI? There is some suggestion that this might be the case, since Anthropic specifically took steps to ensure that any code generated by AI was not marked as such.

This might be backfiring on them now that we can all see the code, because purely AI-generated material isn’t copyrightable. In the U.S., the Copyright Office has been very clear that human authorship is required for copyright protection, and it will not register works created by a machine “without any creative input or intervention from a human author.” Courts have reinforced this baseline in the context of AI-generated art; for example, the Supreme Court recently declined to hear a dispute that would have challenged the “human authorship required” rule, leaving lower-court decisions intact.

However, there’s a practical nuance:

  • “AI-assisted” creation can still result in copyrightable output if there’s sufficient human authorship (selection, arrangement, revision, and creative control), even if some material is generated.

  • “AI-only” output (where humans contribute essentially no creative authorship) is far less likely to be protected.

So, what does that mean for Anthropic?

  • If parts of Claude Code were largely generated by AI with minimal human creative contribution, copyright claims over those specific fragments could be narrower than people assume. 

  • But the overall codebase may still contain substantial human authorship, and software often reflects extensive human structure, integration decisions, and revision, even in AI-assisted development. 

  • And importantly, trade secret law does not require human authorship in the same way copyright does. Trade secret protection is about secrecy and economic value, not “original expression.”

The interesting strategic risk is that Anthropic, rather than losing all protection, ends up in a situation where enforcing its intellectual property becomes messier. Messiness is expensive, especially when code is already proliferating. I’m not going to pretend I don’t have a small feeling of schadenfreude about this. Full disclosure: I authored a book that is part of the Bartz et al. v. Anthropic case currently set for a final settlement hearing. The irony that Anthropic used a bunch of other people’s intellectual property and now might have some trouble protecting its own is too perfect.

4) What the future could hold for Anthropic after this

No one outside Anthropic knows the real internal postmortem yet, but we can sketch plausible paths.

Scenario 1: Contained incident, painful week, then normal life

This is the most likely scenario. Anthropic has already publicly characterized the issue as release packaging human error and said no customer data/credentials were exposed. If that holds, many enterprise customers will treat this as serious-but-manageable and take the steps outlined above, e.g., update versions, ask for a write-up, move on. 

The operational output here is a classic process tightening lesson (e.g., tighter build/release controls, stronger checks for debug artifacts, and improved governance over what goes to public registries—i.e., secure software development hygiene). 

Scenario 2: Competitive acceleration where the leak becomes a roadmap donation

Axios summarized a real strategic concern: exposed code can provide a blueprint for competitors and the broader ecosystem to study product choices and replicate patterns faster.

Even if cloning isn’t straightforward, disclosure can reduce R&D cost for others. In AI tooling, where “wrapper” quality (agent orchestration, permissions models, UX, integrations) is a major differentiator, losing secrecy over those implementation choices may have consequences.

Scenario 3: The incident becomes a trust tax

Anthropic sells into enterprises. Enterprises are increasingly running AI vendor diligence the way they run cloud diligence: secure SDLC, supply-chain controls, incident response maturity, and evidence for all of it. A source leak doesn’t automatically mean vendor blacklisting, but it may add friction:

  • more customer questionnaires

  • more procurement/security escalation

  • longer cycles for regulated customers

  • a greater need to demonstrate secure development practices, not just talk about safety research

Given how economically significant Claude Code appears to be for Anthropic, the trust tax is nontrivial, even if it’s temporary. 

Scenario 4: The IP fight becomes a sideshow

Once code is widely mirrored, full containment is unrealistic. The practical goal becomes to slow the spread, protect customers, and keep shipping. That might push Anthropic toward:

  • tightening legal and technical controls while accepting some leakage

  • focusing on model/service-side advantages that aren’t replicated by client code alone

  • emphasizing the “service” moat (e.g., infrastructure, model access, safety layers, enterprise integrations) rather than the secrecy of a CLI wrapper

A closing note: the boring lesson is the real one

Claude Code’s leak is a modern story with a very old moral, which is that the riskiest part of your system is often your workflow.

A source map is not sophisticated. A packaging error is not science fiction. But when you’re distributing an agentic tool that sits close to code and credentials (and it’s generating billions in run-rate revenue), these boring mistakes become front-page events and even more of a reason to treat release engineering, artifact hygiene, and secure SDLC as part of your safety story rather than an afterthought.

Finally, reminder that the opinions expressed in this article are the opinion of The Privacy Design Lab. They are not legal advice, and no attorney-client relationship is formed by reading this article. If you need to consult legal counsel, you can book a consult with ARLA Strategies or other legal counsel you trust!

If you’re tired of privacy advice that only works in theory, you’re in the right place.

The Privacy Design Lab exists for people who want to practice privacy, not just talk about it. It focuses on practical, repeatable ways teams actually learn. We offer hands-on workshops, downloadable systems, and the Design Studio community where teams and practitioners can go deeper. Paid Fieldnotes subscribers get access to our full archive, plus supporting materials you can actually use.

If that sounds like your kind of work, we’d love to have you.
