Analysis · March 31, 2026

The convergence: what Claude Code's leaked source reveals about the future of AI agents

On March 31, 2026, Anthropic accidentally shipped their entire Claude Code source code in an npm package. 512,000 lines. 44 hidden feature flags. A complete roadmap they never intended to publish. We've been building the same architecture — independently, on sovereign hardware — since January.

What leaked

Security researcher Chaofan Shou discovered that Claude Code v2.1.88 on npm contained an unobfuscated source map pointing to a zip archive on Anthropic's cloud storage. Inside: 1,900 TypeScript files — the full harness that makes Claude Code work. Not the model. The everything-else.

The most significant discovery wasn't the code itself — it was 44 feature flags for capabilities that are fully built but not shipped. Compiled code sitting behind flags that get stripped from the public build. A roadmap Anthropic never meant to publish.

The overlap

We've been building a sovereign personal AI system called KAI on an NVIDIA DGX Spark since January 2026. When we read the leaked source, we found that Anthropic arrived at the same architecture — independently, from the opposite direction.

| Claude Code (leaked) | KAI / Intern (ours) | Status |
| --- | --- | --- |
| KAIROS — always-on daemon mode | Phase C Ambient Mode — wake word + state machine | Live |
| autoDream — background memory consolidation | L7 Reflection Stream — persona coherence scoring | Live |
| Coordinator Mode — multi-agent parallel | Intern → Claude Code escalation bridge | Live |
| VOICE_MODE — push-to-talk (feature flag) | Full streaming voice pipeline with barge-in | Live |
| 40+ tool registry | Web search, tool classifier, memory retrieval | Live |
| Memory across sessions | RLM recursive memory with category-aware retrieval | Live |
| Frustration regex detection | Hallucination guardrails + persona drift detection | Live |
| ULTRAPLAN — offload to Opus 30min | Executive lane — deep LLM with expanded context | Live |

Why it matters

This isn't about who built it first. It's about convergent evolution confirming that the industry has found the shape of the solution. Always-on daemon mode, background memory consolidation, multi-agent orchestration, voice interfaces, autonomous ticket execution — these aren't product features. They're the physics of the problem.

The divergence is in the ownership model. Anthropic built KAIROS as a feature of their $2.5B/year cloud product. It requires their API, their pricing, their permission. Your code travels through their servers. They built anti-distillation mechanisms to prevent you from learning how it works.

We built the same architecture on a DGX Spark in a house in Delaware. It uses open-source models. The code never leaves the machine. The entire system runs over Tailscale. No cloud dependency. No API bills. No permission required.

What we're releasing

Intern is the autonomous dev agent extracted from KAI. It's the sovereign alternative to what Anthropic built behind KAIROS. Drop a ticket in a directory. Intern plans the edit with your local LLM, executes it, runs verification, and commits the result to git. Failed tickets escalate. The backlog refills by scanning your codebase.
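That loop can be sketched in a few lines. Everything here is illustrative, not Intern's actual internals: the directory names and the `plan`/`apply`/`verify` hooks are assumptions standing in for the local-LLM planner, the editor, and the test runner.

```python
import json
import subprocess
from pathlib import Path

TICKETS = Path("tickets")          # hypothetical inbox directory
DONE = TICKETS / "done"
ESCALATED = TICKETS / "escalated"

def run_ticket(ticket: Path, plan, apply, verify) -> bool:
    """One pass of the loop: plan -> execute -> verify -> commit, else escalate."""
    spec = json.loads(ticket.read_text())
    edits = plan(spec)                 # ask the local LLM for an edit plan
    apply(edits)                       # write the planned changes to disk
    if verify():                       # e.g. run the test suite
        subprocess.run(["git", "commit", "-am", f"intern: {spec['title']}"], check=True)
        DONE.mkdir(parents=True, exist_ok=True)
        ticket.rename(DONE / ticket.name)
        return True
    ESCALATED.mkdir(parents=True, exist_ok=True)
    ticket.rename(ESCALATED / ticket.name)   # failed tickets escalate
    return False
```

The design choice worth noting is that the filesystem is the whole interface: a ticket is a file, state transitions are moves between directories, and the audit trail is git history.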

It's been running in production since March 2026 — 64+ tickets executed, 120+ tests written, zero human intervention. Works with vLLM, Ollama, OpenAI, or any OpenAI-compatible endpoint. MIT licensed.
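"OpenAI-compatible" means one request shape works across every backend. A sketch under stated assumptions: the ports are the servers' defaults, the model names and the `chat` helper are ours for illustration.

```python
import json
import urllib.request

def chat_request(base_url: str, model: str, prompt: str) -> tuple[str, dict]:
    """Build an OpenAI-style chat-completions request for any compatible server."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return url, payload

def chat(base_url: str, model: str, prompt: str, api_key: str = "not-needed") -> str:
    """POST the request; local servers ignore the key, OpenAI requires a real one."""
    url, payload = chat_request(base_url, model, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Same code, three backends; only the base URL changes:
# chat("http://localhost:8000", "local-coder", "...")             # vLLM
# chat("http://localhost:11434", "llama3", "...")                 # Ollama
# chat("https://api.openai.com", "gpt-4o", "...", api_key="sk-")  # OpenAI
```

Swapping backends is a one-line config change, which is what makes the no-cloud-dependency claim practical rather than aspirational.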

The thesis

Anthropic is building KAIROS as a feature of Claude Code. Apple is building intelligence as a feature of iPhone. Google is building Gemini as a feature of Search.

We're building Uniform, a sovereign personal AI operating system where you own the compute, the memory, the identity, and the voice. The moat isn't the code. Anyone can read the leaked source. The moat is sovereignty.

Third Rail — Independent Research Lab
