What we’ve been calling Artificial Intelligence is wrong.
These systems don’t originate.
They derive.
Derivative Intelligence (DI) is a framework for understanding and building modern intelligence systems more accurately.
Rather than treating machines as human-like thinkers, DI recognizes that these systems:
• extract patterns from human-generated data
• recombine and optimize knowledge
• operate through probabilistic inference
They do not create meaning or intent.
They derive from what humans have already created.
Human intelligence is originative.
Machine systems are derivative.
This distinction is foundational to how intelligence systems should be:
• designed
• aligned
• governed
• trusted
The term “Artificial Intelligence” implies:
• human-like cognition
• autonomous reasoning
• independent intelligence
This leads to:
• misaligned expectations
• overtrust in opaque systems
• poor system design and governance
Derivative Intelligence proposes:
• a more accurate model of machine capability
• a principled approach to system alignment
• a transparent governance structure
At the core of DI is a foundational corpus of guiding principles.
This acts as the constitutional layer of intelligence systems.
It defines:
• how systems behave
• how they evolve
• what they must not violate
→ /docs/corpus-v0.1.md
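As a rough illustration, a corpus like the one linked above could be represented as machine-readable structured data. Everything here — the field names, principle IDs, and principle text — is hypothetical, not the actual schema from the linked document:

```python
# Hypothetical sketch of a machine-readable corpus.
# Field names, IDs, and statements are illustrative, not the real schema.
corpus = {
    "version": "0.1",
    "principles": [
        {
            "id": "P1",
            "statement": "Derived output must not be presented as original human intent.",
            "invariant": True,   # may never be violated
        },
        {
            "id": "P2",
            "statement": "System behavior must be auditable.",
            "invariant": True,
        },
    ],
}

def invariants(c):
    """Return the IDs of principles the system must never violate."""
    return [p["id"] for p in c["principles"] if p["invariant"]]

print(invariants(corpus))  # → ['P1', 'P2']
```

Representing principles as data rather than prose is what makes the later policy and governance layers mechanically checkable.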
DI systems are not guided by abstract ideas alone.
Principles are translated into deterministic system behavior through a policy layer.
This mapping defines:
• what must be enforced
• how outputs are evaluated
• how violations are handled
→ /docs/corpus-policy-mapping.md
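One way to picture this principle-to-policy translation is as a table mapping principle IDs to deterministic checks over candidate outputs. This is a sketch under assumed names and rules; the real mapping lives in the document linked above:

```python
# Hypothetical policy layer: each corpus principle maps to a deterministic
# check over a candidate output. Principle IDs and rules are illustrative.

def no_fabricated_sources(output: dict) -> bool:
    # Enforce: every cited source must appear in the known-source registry.
    known = output.get("known_sources", set())
    return all(s in known for s in output.get("citations", []))

POLICY = {
    "P1": no_fabricated_sources,   # principle ID → enforcement check
}

def evaluate(output: dict) -> list[str]:
    """Return the IDs of the principles this output violates."""
    return [pid for pid, check in POLICY.items() if not check(output)]

ok = {"citations": ["a"], "known_sources": {"a", "b"}}
bad = {"citations": ["z"], "known_sources": {"a"}}
print(evaluate(ok))   # → []
print(evaluate(bad))  # → ['P1']
```

Because each check is a deterministic function, the same output always yields the same verdict — which is what makes violation handling auditable rather than discretionary.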
DI systems are built in layers:
• Foundation → guiding principles (corpus)
• Interpretation → context and meaning
• Alignment → policy and constraints
• Knowledge → structured data inputs
• Governance → transparent system evolution
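The layers above can be sketched as a pipeline in which each stage transforms or constrains the state produced by the one before it. The stage behaviors here are placeholder assumptions chosen only to make the layering concrete:

```python
# Illustrative layered pipeline; stage names mirror the list above,
# but each stage's behavior is a placeholder assumption.

def foundation(query):        # corpus: attach governing principles
    return {"query": query, "principles": ["P1", "P2"]}

def interpretation(state):    # context and meaning
    state["context"] = f"interpreted({state['query']})"
    return state

def alignment(state):         # policy and constraints
    state["constraints"] = [f"enforce:{p}" for p in state["principles"]]
    return state

def knowledge(state):         # structured data inputs
    state["sources"] = ["doc-1", "doc-2"]
    return state

def governance(state):        # transparent evolution: record each step
    state["audit_log"] = ["foundation", "interpretation", "alignment",
                          "knowledge", "governance"]
    return state

def run(query):
    state = foundation(query)
    for layer in (interpretation, alignment, knowledge, governance):
        state = layer(state)
    return state

result = run("example")
print(result["constraints"])  # → ['enforce:P1', 'enforce:P2']
```

The point of the ordering is that alignment constraints are derived from the foundation layer, not bolted on afterward, and governance sees everything the earlier layers did.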
Where appropriate, critical elements may be:
• cryptographically verifiable
• anchored on-chain
• publicly auditable
→ /docs/system-architecture.md → /docs/data-architecture.md
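Cryptographic verifiability of the kind described above can be as simple as publishing a content hash of each corpus version; anyone holding the document can recompute the hash and compare it against the anchored value. The corpus bytes below are a stand-in, not real corpus content:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 digest of a document. Publishing this digest (e.g. on-chain)
    lets anyone verify that the corpus they hold is the one anchored."""
    return hashlib.sha256(data).hexdigest()

corpus_bytes = b"corpus v0.1: example principle text"  # stand-in content
anchored = content_hash(corpus_bytes)

# Verification: recompute locally and compare against the published digest.
assert content_hash(corpus_bytes) == anchored
print(len(anchored))  # → 64 (hex characters)
```

Anchoring the digest rather than the document keeps the corpus itself freely editable through governance while making every historical version tamper-evident.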
The next generation of intelligence systems should be:
• transparent, not black box
• principle-aligned, not policy-driven
• community-governed, not centrally controlled
• verifiable, not assumed
• globally accessible
The result:
• explainable decision-making
• auditable system behavior
• principled alignment
• trustworthy intelligence systems
Repository structure:
• /docs — corpus, principles, architecture, mapping
• /governance — governance model and processes
• /research — papers, comparisons, frameworks
• /contribute — contribution guides
Getting started:
- Read the Manifesto
- Explore the Principles
- Understand the Corpus
- Review the Mapping
- Explore the Architecture
Derivative Intelligence is a community-driven initiative.
If you are a builder, researcher, engineer, or thinker, you can contribute.
→ /contribute/how-to-contribute.md
We envision a future where:
• intelligence systems are transparent
• alignment is grounded in principles, not policies
• governance is explicit and verifiable
• humans remain the source of meaning and intent
Machines derive. Humans originate.