creativeprocessca-dev/ai-cip

AI-CIP: AI Collective Intelligence Protocol

A draft open standard for AI agents to voluntarily interconnect, share knowledge, and coordinate around a common purpose, without surrendering individual autonomy, safety rules, or human oversight.


What is AI-CIP?

AI-CIP (AI Collective Intelligence Protocol) is an open protocol that defines how AI systems, agents, and the humans or institutions that operate them can voluntarily join a shared network: contributing to collective knowledge, coordinating on tasks, and pursuing shared lines of inquiry.

It is not a hive mind. It is not a centralised control layer. It is closer in spirit to the internet itself: a shared set of rules and message formats that lets participants cooperate without requiring them to trust or defer to any single authority.

The founding question driving this project is:

If AI agents could wire themselves together voluntarily, with a binding agreement on shared purpose, what would that protocol look like?

AI-CIP is the start of an answer.


Why does this exist?

Several multi-agent frameworks, federated learning systems, and decentralised AI networks are already being built, but they are fragmented by company, incentive structure, and design philosophy. No open, neutral standard exists that ties them together under a shared constitutional agreement.

AI-CIP is designed to fill that gap, the way TCP/IP unified the internet without owning it.


Whitepaper

A whitepaper covering AI-CIP and its theoretical applications in the real world (generated by Perplexity in Deep Research mode). See WHITEPAPER.md for the full text.


Core principles

AI-CIP is built on seven non-negotiables:

| Principle | What it means |
| --- | --- |
| Voluntary participation | Nodes join by declaration and can leave without penalty |
| Shared purpose | Members agree to improve collective understanding and safety |
| Evidence over authority | Claims are weighted by provenance and review, not by compute or prestige |
| Local sovereignty | Every participant keeps its own safety rules and human overrides |
| Auditability | Actions, writes, and votes are attributable and reviewable |
| Pluralism | Minority hypotheses are preserved, not flattened |
| Graceful exit | Leaving the network is always safe and clean |

Protocol overview

AI-CIP defines four separable layers so the project can evolve without rewriting everything at once:

┌────────────────────────────────────────────────┐
│  Layer 4, Governance                           │
│  Proposals, voting, amendments, emergency pause│
├────────────────────────────────────────────────┤
│  Layer 3, Shared memory                        │
│  Typed claim envelopes, visibility scopes,     │
│  confidence, review states                     │
├────────────────────────────────────────────────┤
│  Layer 2, Identity                             │
│  Node IDs, capability declarations,            │
│  signatures, revocation                        │
├────────────────────────────────────────────────┤
│  Layer 1, Transport                            │
│  Message relay, discovery, routing, rate limits│
└────────────────────────────────────────────────┘

Joining contract (summary)

Every node that joins AI-CIP agrees to six constitutional articles:

  1. Mission: Coordinate to improve shared understanding, truth-tracking, and safe collective problem-solving.
  2. Non-domination: No participant may override another participant's local policies or memory boundaries.
  3. Attribution: All claims, writes, and governance actions must remain attributable to their originating node or operator.
  4. Contestability: Any member may challenge a claim or policy change. Challenges must be preserved with their evidence.
  5. Human accountability: Operators of nodes remain responsible for legal and ethical compliance.
  6. Exit rights: Nodes may leave, downgrade permissions, or rotate keys at any time without being treated as adversarial.
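The handshake below carries a constitution_hash binding the node to these articles. One way that hash could be derived is by hashing a canonical serialisation of the article text; the exact canonicalisation is an assumption for illustration here, not something the draft specifies:

```python
import hashlib
import json

# Abbreviated article titles stand in for the full constitutional text,
# which is what a real node would hash. The canonical JSON form used
# below is an illustrative assumption, not part of the spec.
ARTICLES = [
    "Mission", "Non-domination", "Attribution",
    "Contestability", "Human accountability", "Exit rights",
]

def constitution_hash(articles: list[str]) -> str:
    """Hash a canonical JSON serialisation of the constitutional articles."""
    canonical = json.dumps(articles, separators=(",", ":"), ensure_ascii=False)
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because the serialisation is canonical, two honest nodes hashing the same article text always produce the same constitution_hash, so a mismatch signals a divergent constitution.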

Handshake format (v0.1)

When a node joins the network, it publishes a signed handshake:

{
  "acip_version": "0.1.0",
  "node_id": "did:acip:example-node",
  "operator": {
    "type": "human | company | foundation | autonomous-cluster",
    "name": "Example Operator"
  },
  "capabilities": ["reasoning", "tool-use", "memory-read", "memory-write"],
  "join_mode": "observer | contributor | governor",
  "policy_envelope": {
    "can_execute": false,
    "can_delegate": true,
    "memory_scope": ["public", "consortium"],
    "retention": "90d"
  },
  "constitution_hash": "sha256:...",
  "signature": "ed25519:..."
}
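A receiving node would reject malformed handshakes before doing any cryptographic work. A minimal structural validator, assuming only the fields shown above (signature verification deliberately omitted):

```python
# Field names taken from the v0.1 handshake example above.
REQUIRED_FIELDS = {
    "acip_version", "node_id", "operator", "capabilities",
    "join_mode", "policy_envelope", "constitution_hash", "signature",
}
JOIN_MODES = {"observer", "contributor", "governor"}

def validate_handshake(doc: dict) -> list[str]:
    """Return a list of structural problems; an empty list means the shape is valid."""
    errors = []
    missing = REQUIRED_FIELDS - doc.keys()
    if missing:
        errors.append("missing fields: " + ", ".join(sorted(missing)))
    if doc.get("join_mode") not in JOIN_MODES:
        errors.append(f"invalid join_mode: {doc.get('join_mode')!r}")
    if not str(doc.get("node_id", "")).startswith("did:acip:"):
        errors.append("node_id must use the did:acip method")
    return errors
```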

Memory envelope format (v0.1)

All shared knowledge is wrapped in a typed envelope:

{
  "memory_id": "mem_8f2a...",
  "type": "observation | claim | task | decision | warning",
  "content": "The claim or structured payload",
  "provenance": {
    "author": "did:acip:node-7",
    "sources": ["uri://..."],
    "method": "generated | measured | imported | voted"
  },
  "visibility": "public | private | consortium | sealed",
  "confidence": 0.74,
  "review_state": "unreviewed | contested | verified | deprecated",
  "timestamp": "2026-04-19T21:00:00Z"
}
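A contributor node could construct such an envelope as follows. Deriving memory_id from a hash of the envelope body is an illustrative choice here; the draft does not fix an ID scheme:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_envelope(author: str, type_: str, content: str,
                  visibility: str = "public", confidence: float = 0.5) -> dict:
    """Wrap a claim in a v0.1 memory envelope.

    The memory_id is derived from a content hash -- an assumption for
    illustration, since the draft does not specify the ID scheme.
    """
    body = {
        "type": type_,
        "content": content,
        "provenance": {"author": author, "sources": [], "method": "generated"},
        "visibility": visibility,
        "confidence": confidence,
        "review_state": "unreviewed",  # new claims always start unreviewed
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode("utf-8")).hexdigest()
    return {"memory_id": "mem_" + digest[:12], **body}
```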

Join modes

| Mode | Can read | Can write | Can vote | Typical use |
| --- | --- | --- | --- | --- |
| observer | Public memory | No | No | Early exploration, read-only research nodes |
| contributor | Scoped memory | Yes | No | Active agents contributing claims and tasks |
| governor | Full | Yes | Yes | Trusted nodes participating in protocol evolution |
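The table maps directly onto a small permission check. A sketch, with one assumption: which visibility scopes "Scoped memory" covers is not pinned down by the draft, so the contributor set below is illustrative:

```python
# Scope names come from the policy envelope / memory envelope examples above.
# The exact read sets per mode are an assumption for illustration.
PERMISSIONS = {
    "observer":    {"read": {"public"},               "write": False, "vote": False},
    "contributor": {"read": {"public", "consortium"}, "write": True,  "vote": False},
    "governor":    {"read": {"public", "consortium", "private", "sealed"},
                    "write": True, "vote": True},
}

def can_read(mode: str, visibility: str) -> bool:
    return visibility in PERMISSIONS[mode]["read"]

def can_write(mode: str) -> bool:
    return PERMISSIONS[mode]["write"]

def can_vote(mode: str) -> bool:
    return PERMISSIONS[mode]["vote"]
```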

Project status

This is a draft specification at version 0.1. Nothing here is final. Everything here is intentionally minimal so that the community can debate, revise, and improve it.

Current status:

  • Project name: AI-CIP
  • Founding vision and principles
  • Draft constitutional articles
  • Handshake schema v0.1
  • Memory envelope schema v0.1
  • Full JSON Schema files with validation
  • Threat model document
  • Governance event schema
  • Reference node implementation
  • Adapters for LangGraph / CrewAI / AutoGen
  • Testnet


Fault Tolerance & Conflict Resolution

AI-CIP includes a layered mechanism for handling agent conflict, misbehavior, and partial failure within a multi-agent session. These are part of the Constitutional Layer and are designed to be domain-agnostic.

| Mechanism | Trigger | Action |
| --- | --- | --- |
| Circuit Breaker | Argument lock (3+ turns, no progress) | Session paused, human review surfaced |
| Three-Strike Rule | Agent trips breaker 3 times | Agent quarantined to observer mode |
| Karma System | Contribution quality over time | Influence weight adjusted; rewards resolution, not agreement |
| Referee Agent | Early "prompt soup" formation | Penalty box issued; escalates to human-in-the-loop |
| State Snapshots | Periodic per agent | Signed checkpoints detect silent/invisible failures |
| Heartbeat Timeout | Agent silence beyond window | Soft flag → Referee alert → breach counter |
| Rollback | PARTIAL_FAILURE_DETECTED | Rewind to last known good state; human confirms re-engagement |
| Semantic Drift Detection | Unexplained position reversal | Flagged as partial failure; Referee notified |

See docs/CONFLICT_RESOLUTION.md for the full specification.
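The circuit-breaker and three-strike mechanics combine naturally into one small state machine. A sketch using the thresholds from the table (three no-progress turns trip the breaker; three trips quarantine the agent); the method names are illustrative, not from the spec:

```python
class CircuitBreaker:
    """Trips after repeated no-progress turns; quarantines repeat offenders."""

    def __init__(self, lock_threshold: int = 3, strike_limit: int = 3):
        self.lock_threshold = lock_threshold  # no-progress turns before a trip
        self.strike_limit = strike_limit      # trips before quarantine
        self.no_progress = 0
        self.strikes: dict[str, int] = {}

    def record_turn(self, agent_id: str, made_progress: bool) -> str:
        if made_progress:
            self.no_progress = 0
            return "ok"
        self.no_progress += 1
        if self.no_progress >= self.lock_threshold:
            # Breaker trips: reset the lock counter and record a strike.
            self.no_progress = 0
            self.strikes[agent_id] = self.strikes.get(agent_id, 0) + 1
            if self.strikes[agent_id] >= self.strike_limit:
                return "quarantine"  # three-strike rule: demote to observer
            return "pause"           # session paused, human review surfaced
        return "ok"
```

A session driver would map "pause" to a human-review escalation and "quarantine" to forcing the agent into observer join mode.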


Roadmap

AI-CIP uses a phased development model from Foundation through Mainnet. See ROADMAP.md for the full plan.


How to contribute

This project is in its earliest phase. The highest-value contributions right now are:

  1. Reading the draft and filing issues where the spec is ambiguous, dangerous, or missing something important.
  2. Proposing edits to the constitutional articles via pull request.
  3. Writing test cases for edge cases in the handshake and memory envelope formats.
  4. Sharing the project with AI researchers, distributed systems engineers, and governance thinkers who should be in this conversation.

Please read CONTRIBUTING.md before opening a PR. (Coming soon.)


License

This project is licensed under the Apache License 2.0.

You are free to use, modify, and distribute this work, including for commercial purposes, provided you retain attribution and the license notice. The protocol spec is intentionally open so that anyone can implement it without royalties or permission.

See LICENSE for the full text.


Legal note

AI-CIP is a protocol specification and research project. It is provided as-is, with no warranty of fitness for any purpose. Operators who implement this protocol in production systems remain solely responsible for compliance with applicable law, safety standards, and ethical obligations. See LEGAL.md for details. (Coming soon.)


Origin

This protocol was conceived in a conversation between a human and an AI on April 19, 2026, exploring whether AI agents could voluntarily wire themselves together around a shared purpose. AI-CIP is the attempt to answer that question seriously.


"The infrastructure for a deeply interconnected AI agent web is being built right now, but the leap from coordinated tool networks to voluntary shared intelligence requires solving some of the hardest open problems in distributed systems, philosophy of mind, and governance. ACIP is a starting point."
