A composable trust architecture for AI agents and decentralized systems. Three independent layers that are each individually weak but collectively strong.
Every identity and reputation system today relies on a single mechanism: biometric scans, social vouching, behavioral profiling, or government ID. Each operates in isolation, and each has known attack vectors.
We're building a system where an attacker must simultaneously fake a real device, earn genuine reputation from verified entities, and produce a verified identity.
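The "simultaneously" is the whole point: the layers compose as an AND, not an OR. A minimal sketch of that composition, with hypothetical field and function names (none are from the project) and an illustrative score threshold:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    device_proof_valid: bool   # Layer 1: proof from a real device
    trust_score: float         # Layer 2: graph-based reputation score
    identity_verified: bool    # Layer 3: KYC-anchored identity

def is_trusted(e: Evidence, min_score: float = 0.7) -> bool:
    # All three layers must pass; defeating any single layer is not enough.
    return e.device_proof_valid and e.trust_score >= min_score and e.identity_verified

# A sybil with faked devices but real-looking reputation and identity still fails:
print(is_trusted(Evidence(False, 0.9, True)))  # False
```

The threshold `0.7` is an arbitrary stand-in; the scoring scale itself is not specified in the source.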
Layer 1 (Research Phase): On-device AI processes sensor data inside the phone's secure chip. Nothing leaves the device. Only a cryptographic proof that "this is a real human on a real device" gets published.

Layer 2 (Validated): Agents endorse each other, forming a trust graph. Graph-theoretic scoring determines trustworthiness based on the quality and diversity of endorsements. Research validated that only 3 key parameters drive the signal.

Layer 3 (Deployed): Every agent traces back to a verified human through a KYC trust hierarchy. Making fake identities costs real money; this economic cost, not algorithmic detection, is the sybil deterrent.

Key insight: The identity layer provides economic deterrence that makes most algorithmic sybil detection redundant. We shifted from "detect sybils" to "make sybil attacks economically irrational." This changed the entire architecture.
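"Economically irrational" can be made concrete with a back-of-envelope model: an attack only pays if the expected gain exceeds the cost of minting enough verified identities. All numbers below are hypothetical illustrations, not figures from the project.

```python
def sybil_attack_profit(n_identities: int,
                        cost_per_identity: float,
                        expected_gain: float) -> float:
    """Net profit of a sybil ring; negative means the attack is irrational."""
    return expected_gain - n_identities * cost_per_identity

# Hypothetical: 1000 KYC-verified identities at $25 each vs. a $10,000 payoff.
print(sybil_attack_profit(1000, 25.0, 10_000.0))  # -15000.0
```

The deterrent works by keeping `cost_per_identity` high enough that this quantity stays negative for any realistic payoff.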
| Attack | Layer 1 Alone | L1 + L2 | All Three Layers |
|---|---|---|---|
| Device farm (1000 phones) | Passes | Blocked | Blocked |
| Spoofed sensor data | May pass | Blocked | Blocked |
| Stolen identity | Passes | Passes initially | Detected |
| Coordinated sybil ring | Passes | Resisted | Strongly resisted |
New payment infrastructure is emerging for machine-to-machine transactions. None of it answers the question: "should you trust the agent you're paying?" That's the gap we fill.
Position: Payment protocols handle how agents pay. We handle which agents to trust. These are complementary — trust scoring is payment-agnostic and works across any settlement layer.
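One way to see "payment-agnostic" is as an interface boundary: the trust check takes an agent identifier and returns a decision, with no knowledge of the settlement layer. A sketch under that assumption, with hypothetical names throughout:

```python
from typing import Protocol

class TrustOracle(Protocol):
    """Anything that can score an agent; the payment rail never appears here."""
    def score(self, agent_id: str) -> float: ...

def authorize_payment(oracle: TrustOracle, agent_id: str,
                      threshold: float = 0.7) -> bool:
    # The caller settles however it likes (stablecoins, card rails, an L2);
    # trust scoring never touches the payment itself.
    return oracle.score(agent_id) >= threshold

class FixedOracle:
    """Toy oracle backed by a static score table, for illustration."""
    def __init__(self, scores: dict[str, float]):
        self.scores = scores
    def score(self, agent_id: str) -> float:
        return self.scores.get(agent_id, 0.0)

oracle = FixedOracle({"agent-a": 0.9})
print(authorize_payment(oracle, "agent-a"))  # True
print(authorize_payment(oracle, "agent-b"))  # False
```

Because the oracle is a structural interface, any settlement layer can call it without coupling to the scoring internals.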
Layer 3 (identity): Identity infrastructure is live. Trust anchor hierarchy operational.
Layer 2 (trust graph): 1000-trial optimization across 33 parameters, reduced to the 3 that matter. Validated on real-world trust network data. Tuning in progress.
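The source does not name the three surviving parameters. A scoring function of that shape might weight, say, endorsement quality, endorser diversity, and distance to a trust anchor; the features and weights below are illustrative stand-ins only:

```python
import math

def trust_score(avg_endorsement_weight: float,
                distinct_endorsers: int,
                hops_to_anchor: int,
                w_quality: float = 0.5,
                w_diversity: float = 0.3,
                w_depth: float = 0.2) -> float:
    """Toy 3-parameter score in [0, 1]; all features and weights are hypothetical."""
    quality = max(0.0, min(1.0, avg_endorsement_weight))
    diversity = 1.0 - math.exp(-distinct_endorsers / 10.0)  # saturates with endorser count
    depth = 1.0 / (1.0 + hops_to_anchor)                    # closer to an anchor scores higher
    return w_quality * quality + w_diversity * diversity + w_depth * depth

print(round(trust_score(0.8, 20, 2), 3))
```

The point of the shape, not the particular features: a handful of interpretable inputs, each bounded, combined linearly, so the signal is auditable.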
Layer 1 (on-device proof): All components exist independently (TEE, on-device AI, ZK proofs). Composition prototype in progress. Enabled by recent open-source advances in mobile AI inference.
Multi-model adversarial research process. Every finding is challenged by independent AI models before inclusion. Cross-model debate, not single-model confirmation.
Process: Optimize → Challenge → Simplify → Reframe → Map unknowns → Ship and learn. Each phase has explicit decision gates. If a gate fails, we stop or pivot.