
Derivative Intelligence

What we’ve been calling Artificial Intelligence is wrong.

These systems don’t originate.

They derive.


Overview

Derivative Intelligence (DI) is a framework for understanding modern intelligence systems more accurately, and for building them on that understanding.

Rather than treating machines as human-like thinkers, DI recognizes that these systems:

• extract patterns from human-generated data
• recombine and optimize knowledge
• operate through probabilistic inference

They do not create meaning or intent.

They derive from what humans have already created.


Core Idea

Human intelligence is originative.

Machine systems are derivative.

This distinction is foundational to how intelligence systems should be:

• designed
• aligned
• governed
• trusted


The Problem with “AI”

The term “Artificial Intelligence” implies:

• human-like cognition
• autonomous reasoning
• independent intelligence

This leads to:

• misaligned expectations
• overtrust in opaque systems
• poor system design and governance


A New Framework

Derivative Intelligence proposes:

• a more accurate model of machine capability
• a principled approach to system alignment
• a transparent governance structure


The Constitutional Layer

At the core of DI is a foundational corpus of guiding principles.

This acts as the constitutional layer of intelligence systems.

It defines:

• how systems behave
• how they evolve
• what they must not violate

→ /docs/corpus-v0.1.md
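As a concrete illustration, a corpus of guiding principles can be held as structured, read-only records that downstream layers query. The sketch below is a minimal assumption-laden example: the `Principle` type, its field names, and the sample principles are invented for illustration, not the project's actual schema.

```python
# Minimal sketch of a constitutional layer: guiding principles as
# immutable, structured records. Type, fields, and values are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: principles cannot be mutated once loaded
class Principle:
    pid: str          # stable identifier, e.g. "P-001"
    statement: str    # the principle itself
    inviolable: bool  # True if the system must never violate it

CORPUS = (
    Principle("P-001", "Humans remain the source of meaning and intent.", True),
    Principle("P-002", "System behavior must be auditable.", True),
)

def inviolable(corpus):
    """Return the principles a system must never violate."""
    return [p for p in corpus if p.inviolable]
```

A real system would load the corpus from /docs/corpus-v0.1.md (or a structured export of it) rather than hard-coding principles.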


From Principles to Behavior

DI systems are not guided by abstract ideas alone.

Principles are translated into deterministic system behavior through a policy layer.

This mapping defines:

• what must be enforced
• how outputs are evaluated
• how violations are handled

→ /docs/corpus-policy-mapping.md
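Such a mapping can be sketched as a table of deterministic checks over candidate outputs. Everything below — the rule predicates, the principle ids, and the reject-on-violation handler — is a hypothetical illustration, not the actual mapping defined in /docs/corpus-policy-mapping.md.

```python
# Hypothetical policy layer: each rule enforces one principle with a
# deterministic predicate over a candidate output (here, a plain dict).
POLICIES = [
    ("P-002", lambda out: "audit_trail" in out),                 # auditability
    ("P-001", lambda out: out.get("intent_source") == "human"),  # human intent
]

def evaluate(output: dict) -> list:
    """Ids of principles the output violates; empty means compliant."""
    return [pid for pid, check in POLICIES if not check(output)]

def handle(output: dict) -> str:
    """Deterministic handling: any violation rejects the output."""
    violations = evaluate(output)
    return "accepted" if not violations else "rejected: " + ", ".join(violations)
```

For example, `handle({"audit_trail": [], "intent_source": "human"})` passes both checks and is accepted, while an empty output is rejected with the ids of every violated principle.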


System Architecture

DI systems are built in layers:

• Foundation → Guiding principles (corpus)
• Interpretation → Context and meaning
• Alignment → Policy and constraints
• Knowledge → Structured data inputs
• Governance → Transparent system evolution

Where appropriate, critical elements may be:

• cryptographically verifiable
• anchored on-chain
• publicly auditable

→ /docs/system-architecture.md
→ /docs/data-architecture.md
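"Cryptographically verifiable" can be made concrete with a content digest: publish the SHA-256 hash of the corpus, and anyone can recompute it over a local copy to confirm they are reading the text that actually governs the system. This sketch shows only the hashing step; anchoring the digest on-chain is assumed and not shown, and the sample corpus bytes are placeholders.

```python
import hashlib

def corpus_digest(corpus_bytes: bytes) -> str:
    """SHA-256 hex digest; identical content always yields the same digest."""
    return hashlib.sha256(corpus_bytes).hexdigest()

# Maintainers would publish this digest (e.g. anchored on-chain).
published = corpus_digest(b"corpus v0.1 text")

# Any reader can verify a local copy against the published digest.
assert corpus_digest(b"corpus v0.1 text") == published
assert corpus_digest(b"tampered corpus") != published
```

Because the digest is deterministic, verification requires no trust in the publisher's infrastructure — only in the published hash itself.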


Key Properties

The next generation of intelligence systems should be:

• transparent, not black box
• principle-aligned, not policy-driven
• community-governed, not centrally controlled
• verifiable, not assumed
• globally accessible


What This Enables

•	explainable decision-making
•	auditable system behavior
•	principled alignment
•	trustworthy intelligence systems

Repository Structure

•	/docs — corpus, principles, architecture, mapping
•	/governance — governance model and processes
•	/research — papers, comparisons, frameworks
•	/contribute — contribution guides

Get Started


Contributing

Derivative Intelligence is a community-driven initiative.

If you are a:

• builder
• researcher
• engineer
• thinker

You can contribute.

→ /contribute/how-to-contribute.md


Vision

We envision a future where:

• intelligence systems are transparent
• alignment is grounded in principles, not policies
• governance is explicit and verifiable
• humans remain the source of meaning and intent


Final Statement

Machines derive. Humans originate.

About

A community-driven, open-source initiative to redefine intelligence systems as Derivative Intelligence—transparent, principled, and globally accessible.
