Claude Code Leaked & Cloned: How a Solo Dev Built a Multi-Agent Coding Tool Potentially Worth Billions
Discover the groundbreaking multi-agent coding tool built by a solo developer after the alleged leak of Claude Code. We cover the architecture, the market potential, and the future of autonomous AI development.
The concept of "leaked" or "cloned" AI code often sparks panic, but for developers, it represents a massive opportunity.

The Crisis Point: Why "Leaked" Code Isn't the End of Innovation
When foundational models and complex architectures become visible, the goal shifts from mere imitation to superior integration and specialization.
The core challenge in modern software development is complexity. Building a large-scale application requires not just writing code but managing dependencies, debugging complex interactions, and maintaining architectural integrity: tasks that traditionally require large teams of highly paid engineers.
The solo developer's breakthrough addresses this bottleneck head-on. Instead of relying on a single, monolithic AI prompt (which is susceptible to context window limits and hallucination), they built a system of cooperating AIs. This multi-agent architecture is the key differentiator.
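The leak coverage doesn't describe the clone's internals, so the following is a purely illustrative sketch of the planner/worker pattern the paragraph alludes to. Every name here (`planner`, `callLLM`, `runMultiAgent`) is invented; `callLLM` is a stand-in for any real model API call.

```typescript
// Hypothetical sketch of a multi-agent decomposition: a planner splits a
// task, and each worker agent handles one subtask in its own fresh
// context. None of these names come from the leaked source.

type Subtask = { id: number; description: string };

async function callLLM(role: string, prompt: string): Promise<string> {
  // Stand-in for a real model call (e.g. an HTTP request to a model API).
  return `[${role}] handled: ${prompt}`;
}

async function planner(task: string): Promise<Subtask[]> {
  // A real planner would ask the model to decompose the task; here we
  // return a fixed decomposition to keep the sketch runnable.
  return [
    { id: 1, description: `design interfaces for: ${task}` },
    { id: 2, description: `implement core logic for: ${task}` },
    { id: 3, description: `write tests for: ${task}` },
  ];
}

async function runMultiAgent(task: string): Promise<string[]> {
  const subtasks = await planner(task);
  // Each worker sees only its own subtask, so no single context window
  // has to hold the whole problem -- the claimed advantage over one
  // monolithic prompt.
  return Promise.all(
    subtasks.map((s) => callLLM(`worker-${s.id}`, s.description))
  );
}

runMultiAgent("add OAuth login").then((results) =>
  results.forEach((r) => console.log(r))
);
```

The point of the pattern, whatever the clone's actual implementation looks like, is the isolation: a failed or hallucinating worker poisons one subtask, not the whole run.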

Beyond the Code: The Economic Impact of Autonomous Development
The potential valuation of this technology is staggering precisely because it doesn't just improve coding; it fundamentally changes the economics of software creation.
If a single developer, utilizing this multi-agent framework, can achieve the output quality and speed of a small, outsourced development team, the cost of entry for complex software plummets. This has immediate, massive implications across multiple industries:
Startups: Previously, a Minimum Viable Product (MVP) required significant seed funding just to hire the initial engineering team. Now the barrier to entry is reduced to the cost of running the AI platform itself.
Enterprise Software: Large companies can use this tool to rapidly prototype internal tools, automating the tedious process of connecting disparate legacy systems.
Education: It changes how we teach computer science, allowing students to focus on high-level architecture and problem-solving rather than rote syntax memorization.
What the 59.8 MB source map actually contained
The leak wasn't theatrical. On March 31, 2026, Anthropic shipped version 2.1.88 of the @anthropic-ai/claude-code npm package with a 59.8 MB JavaScript source map (.map) file inadvertently bundled in. That file is the kind of thing every webpack build produces: a debugging artifact that maps minified production code back to readable source. The standard practice is to strip it before publishing. Somebody didn't, and the package shipped to npm with the entire internal source mapped one-to-one to its production output.
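It's worth seeing why a shipped .map file is so dangerous. A source map is plain JSON, and its `sourcesContent` field embeds the complete original source of every file that went into the bundle. The toy map below mimics the structure webpack emits (the file paths and contents are invented for illustration); recovering a source tree from it is a few lines of code.

```typescript
// A source map is just JSON. The risky field is sourcesContent: it
// carries the full original source of every input file, so shipping the
// .map ships the code. This toy map mimics webpack's output shape.

interface SourceMap {
  version: number;
  file: string;
  sources: string[];
  sourcesContent?: string[];
  mappings: string;
}

const exampleMap: SourceMap = {
  version: 3,
  file: "cli.min.js",
  sources: ["src/agent.ts", "src/tools/bash.ts"], // invented paths
  sourcesContent: [
    "export class Agent { /* full original TypeScript here */ }",
    "export function runBash(cmd: string) { /* ...and here */ }",
  ],
  mappings: "AAAA,SAASA", // VLQ-encoded position data (truncated)
};

// Recovering the original source tree is this trivial:
function recoverSources(map: SourceMap): Map<string, string> {
  const files = new Map<string, string>();
  map.sources.forEach((path, i) => {
    files.set(path, map.sourcesContent?.[i] ?? "");
  });
  return files;
}

for (const [path, code] of recoverSources(exampleMap)) {
  console.log(`${path}: ${code.length} chars recovered`);
}
```

The upstream fix is equally mundane: webpack's `devtool: "hidden-source-map"` emits the map without referencing it from the bundle, and an explicit `files` allowlist in package.json keeps `*.map` out of the published tarball.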
Inside was the entire src/ directory of Claude Code: roughly 512,000 lines of TypeScript across ~1,900 files. Not the model. Not the weights. The agentic harness. The orchestration layer that gives Claude its tool use, file access, bash execution, and multi-agent coordination. That distinction matters more than most coverage acknowledged. The brain wasn't leaked. The body was. But for an agent product, the body is most of what makes it work.
Solayer Labs intern Chaofan Shou clocked it at 4:23 am ET. Within hours the codebase had been mirrored across GitHub in dozens of forks. Within 48 hours @realsigridjin had ported the whole thing to Python (using OpenAI's Codex, which is the funniest detail in this entire story) and pushed it as claw-code, which crossed 75,000 stars before Anthropic's DMCA notices started landing.
The unreleased features the leak exposed
The reverse-engineering community pulled out two unreleased features that tell you where Anthropic was actually going. KAIROS is an always-on proactive assistant that watches log streams and acts without being prompted: closer to a daemon than an assistant, the kind of thing that runs in the background of your dev environment and surfaces issues before you ask. ULTRAPLAN offloads complex tasks to a remote Opus session for up to 30 minutes of deep planning before returning a concrete plan to the local agent.
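The KAIROS pattern as described, watch a stream and escalate only on interesting lines, can be sketched in a few lines. This is a reconstruction from the coverage, not code from the leak; the trigger rules and the `escalate` helper are invented for illustration.

```typescript
// Hypothetical sketch of the KAIROS pattern: scan log lines and trigger
// the agent only when a line matches a rule, instead of waiting for a
// user prompt. Rules and escalate() are invented for illustration.

type Trigger = { pattern: RegExp; action: string };

const triggers: Trigger[] = [
  { pattern: /ERROR|Unhandled/, action: "diagnose" },
  { pattern: /deprecat/i, action: "suggest-migration" },
];

function escalate(line: string, action: string): string {
  // A real daemon would spin up an agent session here; we just record
  // what would have happened.
  return `${action}: ${line.trim()}`;
}

function watch(lines: string[]): string[] {
  const actions: string[] = [];
  for (const line of lines) {
    for (const t of triggers) {
      if (t.pattern.test(line)) {
        actions.push(escalate(line, t.action));
        break; // first matching trigger wins
      }
    }
  }
  return actions;
}

console.log(
  watch([
    "INFO server started",
    "ERROR connection refused",
    "warning: fs.exists is deprecated",
  ])
);
```

The interesting design question a real KAIROS has to answer is the one this sketch dodges: when is a match worth interrupting the developer, and when is silence the right action?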
Both of those features change the framing of what a coding agent is. KAIROS isn't reactive. ULTRAPLAN isn't real-time. The architectural bet they reveal is that the next generation of agents lives on a much longer time horizon than "answer my prompt." An assistant that watches your terminal for ten hours and acts when something interesting happens is a fundamentally different product than the chat box. That's the part of the leak that hurts Anthropic most. Not the source code itself, which Anthropic argued was non-novel, but the roadmap signal it gave away.
The valuation question in the headline is harder to answer than it looks. The harness without the model is interesting but not billion-dollar interesting. The harness with a frontier model behind it. That's the bet. Solo devs cloning the harness now have the scaffolding. They still need a model. Which is exactly why Anthropic's Section D.4 enforcement (the no-reverse-engineering clause) became the next chapter of this story.
Related coverage
If this was useful, here is the rest of saavage.com's coverage on this beat:
Goose vs Claude Code: The Future of Free AI Coding
Cursor Launches a New AI Agent Experience to Take On Claude Code and Codex
Cursor Challenges AI Giants: The New Coding Agent That Could Replace Codex and Claude
Claude Just Leaked Their Lovable Killer... This Changes Everything


