Active Research — March 2026

Three-Layer Trust

A composable trust architecture for AI agents and decentralized systems: three independent layers, each weak alone but collectively strong.

The Problem

Trust is not one thing

Every identity and reputation system today relies on a single mechanism: biometric scans, social vouching, behavioral profiling, or government ID. Each operates alone, and each has known attack vectors.

We're building a system where an attacker must simultaneously fake a real device, earn genuine reputation from verified entities, and produce a verified identity — all at once.

3: Independent trust layers
4B+: Compatible devices globally
$18.4B: Behavioral biometrics market (2033)
$52.6B: AI agent market (2030)
Architecture

Three layers, each covering the others' weaknesses

Layer 1: Device Liveness (Research Phase)

On-device AI processes sensor data inside the phone's secure chip. Nothing leaves the device. Only a cryptographic proof — "this is a real human on a real device" — gets published.
Layer 2: Citation Reputation (Validated)

Agents endorse each other, forming a trust graph. Graph-theoretic scoring determines trustworthiness based on the quality and diversity of endorsements. Research validated that only 3 key parameters drive the signal.
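One way to read "graph-theoretic scoring based on the quality and diversity of endorsements" is a PageRank-style fixed point with a diversity bonus. A minimal sketch under that assumption; the parameter names and values here (DAMPING, ITERATIONS, DIVERSITY_WEIGHT) are illustrative stand-ins, not the three validated parameters:

```python
from collections import defaultdict

# Illustrative parameters; the source does not name its three parameters.
DAMPING = 0.85          # how much trust flows through endorsements
ITERATIONS = 20         # fixed-point iterations
DIVERSITY_WEIGHT = 0.5  # reward endorsements from many distinct peers

def trust_scores(endorsements):
    """endorsements: list of (endorser, endorsee) directed edges."""
    nodes = {n for edge in endorsements for n in edge}
    out = defaultdict(list)
    for src, dst in endorsements:
        out[src].append(dst)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(ITERATIONS):
        nxt = {n: (1 - DAMPING) / len(nodes) for n in nodes}
        for src, dsts in out.items():
            share = DAMPING * score[src] / len(dsts)
            for dst in dsts:
                nxt[dst] += share
        score = nxt
    # Diversity bonus: scale by the fraction of distinct endorsers.
    endorsers = defaultdict(set)
    for src, dst in endorsements:
        endorsers[dst].add(src)
    return {n: score[n] * (1 + DIVERSITY_WEIGHT * len(endorsers[n]) / len(nodes))
            for n in nodes}

# "b" is endorsed by three distinct agents; "d" by none.
edges = [("a", "b"), ("c", "b"), ("d", "b"), ("b", "a"), ("c", "a")]
scores = trust_scores(edges)
```

A single endorser spamming edges cannot raise a score much here: the damped flow splits across its out-edges and the diversity term counts distinct endorsers, not edge count.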
Layer 3: Identity Anchor (Deployed)

Every agent traces back to a verified human through a KYC trust hierarchy. Making fake identities costs real money — this economic cost is the sybil deterrent, not algorithmic detection.

Key insight: The identity layer provides economic deterrence that makes most algorithmic sybil detection redundant. We shifted from "detect sybils" to "make sybil attacks economically irrational." This changed the entire architecture.
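The shift from "detect sybils" to "make sybil attacks economically irrational" reduces to a break-even condition. A back-of-envelope sketch; the per-identity cost figure is an assumed placeholder, not the project's number:

```python
# Assumed placeholder cost of obtaining one verified identity.
KYC_COST_PER_IDENTITY = 25.0

def attack_cost(num_sybils: int,
                cost_per_identity: float = KYC_COST_PER_IDENTITY) -> float:
    """Each fake agent needs its own verified human anchor, so the
    attacker's cost scales linearly with the size of the sybil ring."""
    return num_sybils * cost_per_identity

def attack_is_rational(num_sybils: int, expected_gain: float) -> bool:
    """A sybil attack only makes sense if the expected gain beats the cost."""
    return expected_gain > attack_cost(num_sybils)

# A 1000-agent ring must clear this amount just to break even.
ring_cost = attack_cost(1000)
```

The point of the layer is to make `attack_cost` grow linearly in ring size, whereas in a purely algorithmic scheme minting another fake identity is nearly free.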

Defense in Depth

Attack resistance compounds across layers

| Attack | Layer 1 Alone | L1 + L2 | All Three Layers |
| --- | --- | --- | --- |
| Device farm (1000 phones) | Passes | Blocked | Blocked |
| Spoofed sensor data | May pass | Blocked | Blocked |
| Stolen identity | Passes | Passes initially | Detected |
| Coordinated sybil ring | Passes | Resisted | Strongly resisted |
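The compounding above amounts to a conjunction: every layer must pass independently, so defeating any single layer is not enough. A minimal sketch with illustrative names and an assumed reputation threshold:

```python
def is_trusted(has_liveness_proof: bool,
               reputation_score: float,
               has_identity_anchor: bool,
               reputation_threshold: float = 0.5) -> bool:
    """All three layers must pass; faking one layer alone never suffices."""
    return (has_liveness_proof
            and reputation_score >= reputation_threshold
            and has_identity_anchor)

# A device farm passes liveness but has no earned reputation or anchors.
device_farm_trusted = is_trusted(True, 0.0, False)
```

This is why each layer can afford to be individually weak: an attacker must clear all three thresholds simultaneously, and the layers fail in different ways.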
Where This Fits

The trust layer that payment protocols need

New payment infrastructure is emerging for machine-to-machine transactions. None of it answers the question: "should you trust the agent you're paying?" That's the gap we fill.

Identity Anchor: Verified humans anchor all agents
Citation Reputation: Graph-theoretic trust scoring
Device Liveness: On-device AI proof of real device
Settlement Layer: Stablecoin payments infrastructure

Position: Payment protocols handle how agents pay. We handle which agents to trust. These are complementary — trust scoring is payment-agnostic and works across any settlement layer.

Research Status

Where we are today

Layer 3 — Deployed

Identity infrastructure is live. Trust anchor hierarchy operational.

Layer 2 — Ablation Complete

1000-trial optimization across 33 parameters. Reduced to 3 that matter. Validated on real-world trust network data. Tuning in progress.
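The 33-to-3 reduction can be pictured as a knock-out ablation: zero out one parameter at a time and keep only those whose removal moves the objective. A sketch with a stand-in objective and threshold; the real scoring function is not given in this summary:

```python
def score(params):
    # Stand-in objective in which only three parameters drive the signal,
    # mirroring the finding; the rest contribute only noise-level weight.
    return (2.0 * params.get("p1", 0)
            + 1.5 * params.get("p2", 0)
            + 1.0 * params.get("p3", 0)
            + 0.001 * sum(v for k, v in params.items()
                          if k not in ("p1", "p2", "p3")))

def ablate(params, threshold=0.01):
    """Keep a parameter only if zeroing it degrades the score noticeably."""
    base = score(params)
    kept = []
    for name in params:
        reduced = dict(params)
        reduced[name] = 0
        if base - score(reduced) > threshold:
            kept.append(name)
    return kept

full = {f"p{i}": 1.0 for i in range(1, 34)}  # 33 parameters
kept = ablate(full)
```

A real ablation would average each knock-out over many trials (the 1000-trial budget) rather than a single deterministic score, but the keep/drop logic is the same.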

Layer 1 — Research Phase

All components exist independently (TEE, on-device AI, ZK proofs). Composition prototype in progress. Enabled by recent open-source advances in mobile AI inference.

Methodology

How we work

Multi-model adversarial research process. Every finding is challenged by independent AI models before inclusion. Cross-model debate, not single-model confirmation.

5+: AI models in adversarial review
6: Research phases per cycle
33→3: Parameters after ablation

Process: Optimize → Challenge → Simplify → Reframe → Map unknowns → Ship and learn. Each phase has explicit decision gates. If a gate fails, we stop or pivot.