Protocols over Platforms

We believe that for AI to be safe, its governance must be public, peer-reviewed, and mathematically verifiable.

SCD Protocol v3.1

Sovereign Compact Directive

The SCD is a formal protocol for defining the "Constitution" of a local AI system. It enforces truth-state classification (FACT/ESTIMATE/UNKNOWN) at the kernel level, ensuring that model outputs are strictly bounded by verifiable data; a minimal sketch of this gating follows the citation below.

Citation:

Desai, P. (2025). The Sovereign Compact Directive: A Protocol for Truth-State Enforcement in Local AI. Zenodo. https://doi.org/10.5281/zenodo.17787619
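
Below is a minimal sketch of how truth-state gating can look in application code, written in Python for concreteness. Every name here (TruthState, Claim, classify, enforce) is illustrative and ours, not the SCD specification; the protocol itself defines this classification below the application layer.

```python
from enum import Enum
from dataclasses import dataclass

class TruthState(Enum):
    FACT = "FACT"          # backed by a verified source
    ESTIMATE = "ESTIMATE"  # cited but not independently verified; flagged as inference
    UNKNOWN = "UNKNOWN"    # no support; must not be asserted as true

@dataclass
class Claim:
    text: str
    source: str | None = None  # citation or dataset reference, if any

def classify(claim: Claim, verified_sources: set[str]) -> TruthState:
    """Assign a truth state based on whether the claim cites a verified source."""
    if claim.source in verified_sources:
        return TruthState.FACT
    if claim.source is not None:
        return TruthState.ESTIMATE
    return TruthState.UNKNOWN

def enforce(claims: list[Claim], verified_sources: set[str]) -> list[str]:
    """Release only labeled claims; surface unsupported ones instead of asserting them."""
    output = []
    for claim in claims:
        state = classify(claim, verified_sources)
        if state is TruthState.UNKNOWN:
            output.append(f"[UNKNOWN] insufficient evidence for: {claim.text!r}")
        else:
            output.append(f"[{state.value}] {claim.text}")
    return output
```

The property the sketch preserves is that nothing leaves the system without an explicit truth-state label, and claims with no verifiable backing are surfaced as UNKNOWN rather than asserted.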

Publications & Preprints

Governance and Boundary Conditions for Reflective AI Systems

aiXiv Preprint · Dec 2025

Proposes a novel architecture in which "Reflection" is not merely a prompting strategy but a separate, adversarial node in the system topology; a sketch of this topology follows below.

Read on aiXiv
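
To illustrate what "reflection as a separate node" can mean in practice, here is a hedged Python sketch. Generator, Adversary, and run_with_reflection are hypothetical names of ours, not the paper's API; the point is only that the critic is a distinct component in the topology, not a prompt inside the generator.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    objections: list[str] = field(default_factory=list)

class Generator:
    """Produces a candidate answer. Stand-in for the primary model node."""
    def respond(self, prompt: str) -> Draft:
        return Draft(text=f"Answer to: {prompt}")

class Adversary:
    """Separate node whose only job is to attack the draft, never to help write it."""
    def critique(self, draft: Draft) -> list[str]:
        objections = []
        if "source:" not in draft.text:
            objections.append("no supporting source cited")
        return objections

def run_with_reflection(prompt: str, max_rounds: int = 3) -> Draft:
    """Route each draft through the adversarial node until it raises no objections."""
    generator, adversary = Generator(), Adversary()
    draft = generator.respond(prompt)
    for _ in range(max_rounds):
        draft.objections = adversary.critique(draft)
        if not draft.objections:
            break  # adversary has no remaining objections; release the draft
        # a real system would have the generator revise against the objections;
        # here we only annotate the draft to keep the sketch short
        draft.text += f" [revised against: {draft.objections}]"
    return draft
```

Keeping the adversary as its own node means its objections can be logged, audited, and hardened independently of the model that produced the draft.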

Layered Governance for Large Language Model Systems

aiXiv Preprint · Dec 2025

A framework for "Governance as Code": moving safety checks out of the model weights (RLHF) and into the orchestration layer (runtime); see the sketch below.

Read on aiXiv
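
The following sketch shows the flavor of "Governance as Code": safety policies declared as plain functions and enforced by the orchestrator around an opaque model call, leaving the model weights untouched. All names (Policy, no_pii, max_length, govern) are illustrative assumptions, not an implementation from the paper.

```python
from typing import Callable

# a policy returns a violation message, or None if the output is clean
Policy = Callable[[str], str | None]

def no_pii(output: str) -> str | None:
    return "possible PII (email address)" if "@" in output else None

def max_length(limit: int) -> Policy:
    def check(output: str) -> str | None:
        return f"output exceeds {limit} chars" if len(output) > limit else None
    return check

def govern(model_call: Callable[[str], str], policies: list[Policy]) -> Callable[[str], str]:
    """Wrap a model call so every output passes the policy set before release."""
    def guarded(prompt: str) -> str:
        output = model_call(prompt)
        violations = [v for p in policies if (v := p(output))]
        if violations:
            return f"[blocked by runtime governance: {'; '.join(violations)}]"
        return output
    return guarded

# usage: governance lives in the orchestration layer, not in the weights
guarded_model = govern(lambda p: f"echo: {p}", [no_pii, max_length(500)])
print(guarded_model("hello"))
```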

Our Validation Philosophy

1. Public Proof

All safety claims must be backed by reproducible code and public datasets. No "black box" safety.

2. Adversarial Design

Systems are secure only when they survive dedicated red-teaming. Our "M1 Adversary" node embodies this commitment.