BOINC for the AI safety research era: modern engineering and modern cryptography. No cryptocurrency or tokens; purpose-built for the demands of LLM and agent research.
A volunteer-driven, open-source distributed compute network for AI safety research. Researchers propose experiments; volunteers worldwide donate compute by running a sandboxed worker. The platform is tenant-neutral by design; the first tenant on the network is Sentinel, a multi-agent LLM behavioral-drift research project.
Phase 0 — Foundation. Project bootstrap underway. Code begins in Phase 1, starting with a coordinator daemon, tenant SDK, multi-platform worker installer, and execution of D6 (a week-long continuous multi-agent experiment). Phase 2 opens trusted-beta participation; Phase 3 opens to public volunteers.
Public site live at auspexai.network.
- AGPL-3.0 licensed — strong copyleft so derivative work and network-served forks remain open
- Donate-only model — no cryptocurrency, no tokens, no compensation
- Recognition without gamification — signed contribution receipts, mandatory tenant acknowledgment in publications; no leaderboards, no scores, no badges
- Volunteers never paste keys — OAuth Device Flow + OS-keystore for all credentials
- Untrusted-worker by default — result replication and signed submissions are core, not optional
- Multi-tenant from day one — Sentinel runs as the first tenant; the SDK and tenant-acceptance process are open to other research projects
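The untrusted-worker model above can be illustrated with a small sketch: the coordinator dispatches the same work unit to several independent workers and accepts a result only when enough byte-identical submissions agree. This is a minimal, hypothetical illustration of replication-based validation, not the project's actual coordinator code; the function names, quorum size, and the use of raw SHA-256 over result bytes are all assumptions for the example.

```python
import hashlib
from collections import Counter

def result_hash(result: bytes) -> str:
    # Hash the raw result bytes. A real coordinator would canonicalize
    # the output first (e.g. sorted JSON keys) so equal results match.
    return hashlib.sha256(result).hexdigest()

def validate_by_replication(submissions: dict[str, bytes], quorum: int = 2):
    """Accept a result only if at least `quorum` independent workers
    submitted byte-identical output.

    Returns (accepted_hash, agreeing_worker_ids), or (None, []) if no
    result reaches quorum and the work unit must be re-dispatched."""
    hashes = {wid: result_hash(r) for wid, r in submissions.items()}
    counts = Counter(hashes.values())
    best_hash, n = counts.most_common(1)[0]
    if n >= quorum:
        agreeing = [wid for wid, h in hashes.items() if h == best_hash]
        return best_hash, agreeing
    return None, []

# Two of three workers agree, so their result is accepted and the
# dissenting worker's submission is discarded.
accepted, workers = validate_by_replication(
    {"worker-1": b"out-A", "worker-2": b"out-A", "worker-3": b"out-B"}
)
```

In a deployed system the replication check would sit alongside signature verification on each submission, so a worker cannot forge another worker's agreement; here only the quorum logic is shown.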
- Governance — roles, decision rules, recruitment, conflict-of-interest policy
- Code of Conduct — community standards, reporting, escalation pathway
- Contributing — how to contribute (Platform Contributor and Researcher paths)
- Research Ethics Policy — what AI safety research can run on the network and how it's reviewed
- General: contact@auspexai.network
- Research / tenant inquiries: research@auspexai.network
- Security: security@auspexai.network
- Conduct: conduct@auspexai.network
Watch this organization for repository activity as Phase 1 begins.