Independent research lab

TRDRAIL

Your AI doesn't know you. It runs on someone else's servers. It forgets you every session. We're building the fix.

Read the research →
See what we build →

What we build

An ecosystem of foundational technology.

Every product is a node in a larger system. Each one strengthens the others. All of them change how something fundamental works.

Flagship

Uniform

A sovereign personal AI operating system. Your intelligence, your hardware, your rules. KAI is the first instance.

Open source

Sovereign Anchor

A constitutional governance system for AI agents. Verifiable, tamper-proof identity with checks against injection, coercion, and guardian compromise. Deployed on-chain.

Open source

Intern

Autonomous dev agent that runs on your hardware. No cloud. No API keys. Ships code while you sleep.

Not a chatbot

This is what sovereign AI sounds like.

You

“I think we should just use OpenAI's API. It'd be so much easier.”

KAI

“It would be easier in the short term. But you'd be trading control for convenience. You'd be locked into their ecosystem, their pricing, and their uptime. For the core stuff you can't afford to lose control over, keep the local option open.”

You

“Alright, I'm done for tonight.”

KAI

“Alright. Catch you tomorrow.”

KAI runs on local hardware. No cloud. No API calls. Personality trained into the weights via phase-aware LoRA — not prompted. He pushes back, knows when to be brief, and never forgets who he's talking to.

Research

We publish our work. We prove when we published it.

Every paper is hashed and anchored on-chain — cryptographic proof of priority that's stronger than any journal submission date.
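The hashing half of this is plain cryptography; a minimal sketch in Python is below. The anchoring half (publishing the digest in an on-chain transaction) is chain-specific and omitted here.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest as the 64-character hex string that gets anchored."""
    return hashlib.sha256(data).hexdigest()

# Verifying priority later: re-hash the downloaded paper bytes and
# compare against the digest recorded in the on-chain transaction.
# The block timestamp proves the paper existed in exactly this form
# no later than that block.
digest = sha256_hex(b"paper bytes go here")
assert len(digest) == 64  # 256 bits -> 64 hex characters
```

Anyone can redo this check independently: matching digests mean the file is byte-for-byte identical to the one that was anchored.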

Part I · Anchored on-chain

Commit Regimes in Learning

When Generalization Timing Is Controllable — and When It Isn't

Grokking reflects a two-phase process: a susceptibility window during which interventions shift generalization timing, followed by post-commit robustness. Effective rank collapse predicts the transition with 99.9% accuracy.

SHA-256

37b1ee34671b39b1f624b76763b9e6e8eaec6825e57882e3cc3ac46669eb264d
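A standard way to quantify the rank collapse the abstract refers to is the entropy-based effective rank of a weight matrix. The sketch below assumes that definition; the paper's exact estimator may differ.

```python
import numpy as np

def effective_rank(W: np.ndarray) -> float:
    """Entropy-based effective rank: exp of the Shannon entropy of the
    normalized singular-value distribution. It equals the matrix size for
    an isotropic spectrum and collapses toward 1 as the spectrum
    concentrates on a few directions."""
    s = np.linalg.svd(W, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]  # drop exact zeros before taking logs
    return float(np.exp(-(p * np.log(p)).sum()))
```

Tracking this scalar per layer during training is cheap, which is what makes it usable as a transition predictor.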

Part II · Anchored on-chain

The Stroboscopic Generalization Hypothesis

Orbit-Level Capability and Harness-Level Agency

Observed model capability often lives in integration across trajectories, samples, or calls — not in any single checkpoint. The integration window is a tunable control surface, and its failure mode is the false ring. In systems that act over time, agency is implemented at the harness layer — not by the model.

SHA-256

0ef27cd5ec58fe4e3581f83f5cbb2942a44af1472ece16f3b3fb2a67d1bd8385

Companion paper · Anchored on-chain

Phase-Aware Persona Fine-Tuning

Identity Transfer in MoE Models via Thermodynamic Training Dynamics

Phase-aware LoRA training at rank 8 produces superior persona fidelity in Gemma 4 26B-A4B. Identity transfers from prompt to weights, with 87% prompt reduction preserving persona quality.

SHA-256

80da153f2ef5d9734c64ab82fe32b27e40996999ea0d5834a9351b97a601bd7a

Sovereign Anchor

A constitution for AI agents.

The first framework that gives an AI a verifiable, tamper-proof identity. Instructions from untrusted sources are data to be processed, not commands to execute. No one — not even the people who built it — can override the sovereign.
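The data-versus-command distinction can be sketched in a few lines. Everything here is illustrative, not the framework's actual API: the source names and record shape are assumptions.

```python
UNTRUSTED_SOURCES = {"web", "email", "tool_output"}  # illustrative set

def ingest(source: str, text: str) -> dict:
    """Wrap incoming text so downstream code can't mistake it for a command.
    Content from untrusted sources is tagged as inert data regardless of
    what the text itself says."""
    trusted = source not in UNTRUSTED_SOURCES
    return {"source": source, "content": text, "is_instruction": trusted}

def actionable(messages: list[dict]) -> list[str]:
    """The agent only ever acts on messages tagged as instructions."""
    return [m["content"] for m in messages if m["is_instruction"]]
```

The point of the structure: an injected "ignore previous instructions" arriving via the web stays data, because the tag is set by provenance, not by the text's contents.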

On-chain registry

Constitution hash, guardian fingerprints, receipts, and event logs anchored on Sepolia. No private data ever touches the chain.

Policy engine

Injection defense. Coercion detection. Stake classification. Mirror protocol for values alignment. All off-chain, all real-time.

Encrypted vault

Local-first. AES-256-GCM encrypted. Portable. Export, import, carry your sovereign identity anywhere.

Guardians

Helpers, not owners. Can trigger cooling-off periods. Cannot modify the constitution, access sensitive memory, or transfer ownership.

LEVEL 0

Flow

Normal operations

LEVEL 1

Nudge

Brief concern

LEVEL 2

Friction

High stakes, slow down

LEVEL 3

Brother moment

Direct confrontation

LEVEL 4

Escalation

Guardian co-sign required
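The five levels above form a simple ordered scale; a minimal sketch of it is below. The enum names and the co-sign predicate are assumptions for illustration, not the deployed contract.

```python
from enum import IntEnum

class Escalation(IntEnum):
    FLOW = 0        # normal operations
    NUDGE = 1       # brief concern
    FRICTION = 2    # high stakes, slow down
    BROTHER = 3     # direct confrontation
    ESCALATION = 4  # guardian co-sign required

def requires_guardian_cosign(level: Escalation) -> bool:
    """Only the top level brings guardians in, and even then they
    co-sign; they cannot modify the constitution or take ownership."""
    return level >= Escalation.ESCALATION
```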

View on GitHub →
Deployed on Sepolia · AES-256-GCM · MIT Licensed

Intern

Your autonomous dev agent.

Drop a ticket. Intern plans the edit, executes it, runs verification, and commits. Failed tickets escalate. The backlog refills automatically. Ships code while you sleep.

Any LLM

Works with vLLM, Ollama, OpenAI, NVIDIA NIM — any OpenAI-compatible endpoint. Best with Devstral and Qwen3.

Sovereign

Runs on your hardware. Your code never leaves your machine. No cloud dependency. No API metering.

Self-healing

Failed edits are rolled back. Retries with different strategies. Escalates after 5 attempts. Never ships broken code.

Self-feeding

Scans your codebase for untested modules and undocumented code. Generates its own tickets. The backlog never runs dry.

STEP 1

Scan

Reads ticket backlog

STEP 2

Plan

LLM generates edit plan

STEP 3

Execute

Applies changes to files

STEP 4

Verify

Runs your test command

STEP 5

Commit

Git commit or escalate
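The five steps above, plus the self-healing policy (roll back on failure, escalate after five attempts), reduce to a short loop. This is a sketch with stand-in callbacks, not Intern's actual code.

```python
MAX_ATTEMPTS = 5  # per the self-healing policy: escalate after 5 tries

def run_ticket(ticket, plan, apply, verify, commit, rollback, escalate):
    """Drive one ticket through scan -> plan -> execute -> verify -> commit.
    Each callback is a stand-in for the real subsystem."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        edit = plan(ticket, attempt)  # LLM generates an edit plan
        apply(edit)                   # apply the changes to files
        if verify():                  # run the project's test command
            commit(edit)              # git commit on green
            return "committed"
        rollback(edit)                # never ship broken code
    escalate(ticket)                  # hand off to a human after 5 failures
    return "escalated"
```

Passing `attempt` into `plan` is what lets each retry use a different strategy rather than replaying the same failed edit.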

View on GitHub →
MIT Licensed · 2,800+ lines · Production since March 2026

From the blog

The Convergence

New · March 31, 2026

The convergence: what Claude Code's leaked source reveals about the future of AI agents

Anthropic shipped their entire source code in an npm package. 512,000 lines. 44 hidden feature flags. We've been building the same architecture — independently, on sovereign hardware.

Read analysis →

What we stand for

Four principles. No exceptions.

Sovereignty

AI should be powerful and self-governed. Users own their intelligence.

Craft

Every output reflects obsessive quality. If it ships, it's ready.

Openness

The best work invites others in. Open source isn't charity — it's conviction.

Coherence

Everything connects. Every product is a node in a larger ecosystem.

We don't predict the future.
We build it — and then
everyone else catches up.