Self-improving agent framework based on the Viable System Model (VSM)
FractalAI is a multi-agent system that applies the Viable System Model's cybernetic principles to create autonomous, self-improving agents with built-in observability, ethical governance, and knowledge management.
- Multi-Agent Coordination: System 1-5 agents with hierarchical task decomposition
- Self-Improving Intelligence: MIPRO optimization and A/B testing for continuous improvement
- GraphRAG Knowledge System: Neo4j + Qdrant vector search for intelligent context retrieval
- Production-Grade Observability: Prometheus metrics, OpenTelemetry tracing, structured logging
- Ethical Governance: Policy agent (System 5) for ethical boundary enforcement
- Unified LLM Provider: Automatic failover (Claude → Gemini) with tier-based model selection
- DSPy Integration: Declarative self-improving prompts
- Human-in-the-Loop: Obsidian vault integration for review workflows
- Enterprise Security: PII redaction, input sanitization, comprehensive test suite
- Python 3.10+
- Docker & docker-compose (for infrastructure)
- Claude Code authentication (via `claude-agent-sdk`)
```bash
# Clone the repository
git clone https://github.com/PMI-CAL/FractalAI.git
cd FractalAI

# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Start infrastructure services
docker-compose up -d

# Verify installation
python3 test_runtime_integration.py
```

```python
from fractal_agent.agents.research_agent import ResearchAgent
from fractal_agent.utils.model_config import configure_lm

# Initialize LLM
lm = configure_lm(tier="balanced")

# Create research agent
agent = ResearchAgent()

# Execute research task
result = agent.forward(
    topic="Viable System Model",
    depth="comprehensive",
)

print(f"Confidence: {result.confidence}")
print(f"Report: {result.report}")
```

```
fractalAI/
├── fractal_agent/        # Core agent framework
│   ├── agents/           # System 1-5 agents
│   ├── memory/           # Short-term, long-term, GraphRAG
│   ├── observability/    # Metrics, tracing, events
│   ├── utils/            # LLM provider, DSPy integration
│   ├── validation/       # Learning tracker, context validation
│   └── workflows/        # Multi-agent coordination
├── tests/                # Test suites
│   ├── unit/             # Unit tests
│   └── integration/      # Integration tests
├── config/               # Model configs and pricing
├── observability/        # Prometheus, Grafana, OpenTelemetry
├── docs/                 # Architecture documentation
└── test_*.py             # Runtime and phase verification tests
```
Tests actual code execution (not just imports):

```bash
# All runtime tests
python3 test_runtime_integration.py

# Results: 5/5 tests passing
# - Observability (correlation_id, metrics, tracing)
# - Database writes (PostgreSQL event store)
# - LLM calls (Claude Haiku 4.5)
# - Context preparation (research_missing_context)
# - Embeddings (1536-dim consistency)
```

Tests by development phase:

```bash
python3 test_phase0_comprehensive.py  # Foundation (6 tests)
python3 test_phase1_comprehensive.py  # Multi-Agent (6 tests)
python3 test_phase2_comprehensive.py  # Production (6 tests + 278 unit tests)
python3 test_phase3_comprehensive.py  # Intelligence (6 tests)
python3 test_phase4_comprehensive.py  # Coordination (6 tests)
python3 test_phase5_comprehensive.py  # Policy & Knowledge (6 tests)
python3 test_phase6_comprehensive.py  # Context Prep (6 tests)
```

| VSM System | FractalAI Component | Function |
|---|---|---|
| System 1 | Research Agent, Developer Agent | Operational units executing tasks |
| System 2 | Coordination Agent | Resource coordination & conflict resolution |
| System 3 | Intelligence Agent | Internal optimization & efficiency |
| System 4 | Context Preparation Agent | Environmental scanning & adaptation |
| System 5 | Policy Agent | Strategic governance & ethical boundaries |
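The table above can be sketched as a simple routing table. The class names follow the table; the enum and helper below are illustrative only, not part of FractalAI's API:

```python
from enum import IntEnum

class VSMSystem(IntEnum):
    """Beer's five VSM systems, as mapped by FractalAI."""
    OPERATIONS = 1     # Operational units executing tasks
    COORDINATION = 2   # Resource coordination & conflict resolution
    INTELLIGENCE = 3   # Internal optimization & efficiency
    ADAPTATION = 4     # Environmental scanning & adaptation
    POLICY = 5         # Strategic governance & ethical boundaries

# Hypothetical routing table: VSM system -> agent classes serving it
AGENT_FOR_SYSTEM = {
    VSMSystem.OPERATIONS: ["ResearchAgent", "DeveloperAgent"],
    VSMSystem.COORDINATION: ["CoordinationAgent"],
    VSMSystem.INTELLIGENCE: ["IntelligenceAgent"],
    VSMSystem.ADAPTATION: ["ContextPreparationAgent"],
    VSMSystem.POLICY: ["PolicyAgent"],
}

def agents_for(system: int) -> list[str]:
    """Look up which agent classes serve a given VSM system."""
    return AGENT_FOR_SYSTEM[VSMSystem(system)]
```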
```
configure_lm(tier="cheap")            # Tier-based helper
        ↓
UnifiedLM(providers=[...])            # Low-level provider chain
        ↓
AnthropicProvider / GeminiProvider    # Individual providers
        ↓
claude-agent-sdk / genai              # SDKs
```

Tiers:
- `cheap`: Fast, high-volume tasks (Haiku models)
- `balanced`: Most production workloads (Sonnet 3.5)
- `expensive`: Complex reasoning (Sonnet 4.5)
- `premium`: Maximum capability (Opus models)
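A minimal sketch of the failover behavior, assuming each provider exposes a completion call and the chain falls through on error. The classes and functions here are simplified stand-ins, not the actual `UnifiedLM` implementation:

```python
class ProviderError(Exception):
    """Raised when a single provider fails to serve a request."""

class UnifiedLMSketch:
    """Try providers in priority order; fall back to the next on failure."""

    def __init__(self, providers):
        self.providers = providers  # e.g. [anthropic, gemini]

    def complete(self, prompt: str) -> str:
        errors = []
        for provider in self.providers:
            try:
                return provider(prompt)
            except ProviderError as exc:
                errors.append(exc)  # record the failure, try the next provider
        raise ProviderError(f"all providers failed: {errors}")

# Stand-in providers: the first always fails, the second succeeds
def flaky_claude(prompt):
    raise ProviderError("rate limited")

def gemini(prompt):
    return f"gemini: {prompt}"

lm = UnifiedLMSketch([flaky_claude, gemini])
print(lm.complete("hello"))  # falls over to the Gemini stand-in
```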
```
Short-Term Memory (SQLite)
        ↓
Knowledge Extraction Agent
        ↓
Long-Term Memory (Neo4j + Qdrant)
        ↓
GraphRAG Retrieval
```
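The flow above, sketched with in-memory stand-ins for SQLite, Neo4j, and Qdrant. The function names and the substring "retrieval" are purely illustrative; the real system uses vector search:

```python
# In-memory stand-ins for the real stores (illustrative only)
short_term: list[str] = []          # stands in for SQLite
long_term: dict[str, str] = {}      # stands in for Neo4j + Qdrant

def record(event: str) -> None:
    """Stage raw agent output in short-term memory."""
    short_term.append(event)

def extract_knowledge() -> None:
    """Promote staged events into long-term memory, keyed by entity."""
    while short_term:
        event = short_term.pop(0)
        entity = event.split(":")[0]    # crude entity extraction
        long_term[entity] = event

def retrieve(query: str) -> list[str]:
    """Naive retrieval: substring match in place of semantic search."""
    return [v for v in long_term.values() if query.lower() in v.lower()]

record("VSM: five interacting systems model organizational viability")
extract_knowledge()
print(retrieve("viability"))
```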
- LLM call latency & token usage
- Agent execution times
- Memory system performance
- Cost tracking per tier
- Distributed request tracing
- Correlation IDs across agents
- Span hierarchy for debugging
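Correlation-ID propagation across agents can be sketched with Python's `contextvars`. This is a simplified illustration, not FractalAI's observability code:

```python
import contextvars
import uuid

# One correlation ID shared by everything in the current logical request
correlation_id = contextvars.ContextVar("correlation_id", default=None)

def start_request() -> str:
    """Mint a new correlation ID at the request boundary."""
    cid = uuid.uuid4().hex
    correlation_id.set(cid)
    return cid

def log(message: str) -> str:
    """Prefix every log line with the active correlation ID."""
    return f"[{correlation_id.get()}] {message}"

cid = start_request()
line = log("agent started")
assert line == f"[{cid}] agent started"
```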
- VSM System Overview
- Agent Performance
- Cost Tracking
- System Health
Access: http://localhost:3000 (after docker-compose up)
Edit config/models_pricing.yaml to configure:
- Model selection per tier
- Pricing per token
- Provider priorities
- Capability flags (vision, caching)
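A hedged sketch of per-tier cost tracking driven by such a config. The tier names match this README, but the prices and table structure below are made up for illustration; real values live in `config/models_pricing.yaml`:

```python
# Illustrative pricing table: USD per million tokens (input, output)
PRICING_PER_MTOK = {
    "cheap":    (0.25, 1.25),
    "balanced": (3.00, 15.00),
}

def call_cost(tier: str, input_tokens: int, output_tokens: int) -> float:
    """Compute the dollar cost of one LLM call for a given tier."""
    in_rate, out_rate = PRICING_PER_MTOK[tier]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

print(round(call_cost("balanced", 10_000, 2_000), 4))  # 0.03 in + 0.03 out = 0.06
```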
Edit docker-compose.yml to configure:
- Prometheus scrape intervals
- Grafana data sources
- OpenTelemetry endpoints
- PostgreSQL event store
Declarative self-improving prompts:

```python
import dspy

from fractal_agent.utils.dspy_integration import configure_dspy

# Configure DSPy with FractalAI
lm = configure_dspy(tier="balanced")

# Define signature
class TaskDecomposition(dspy.Signature):
    """Decompose a complex task into subtasks."""
    task = dspy.InputField(desc="The complex task to decompose")
    subtasks = dspy.OutputField(desc="List of subtasks")

# Use with auto-optimization
decomposer = dspy.Predict(TaskDecomposition)
result = decomposer(task="Build distributed system")
```

Automatic GraphRAG integration:
```python
from fractal_agent.agents.knowledge_extraction_agent import KnowledgeExtractionAgent

agent = KnowledgeExtractionAgent()
knowledge = agent.extract(
    text=task_output,
    confidence_threshold=0.7,
)
# Automatically stored in Neo4j + Qdrant
# Retrieved via semantic search
```

Ethical governance:
```python
from fractal_agent.agents.policy_agent import PolicyAgent
from fractal_agent.agents.policy_config import PolicyMode

policy = PolicyAgent(mode=PolicyMode.STRICT)
evaluation = policy.evaluate(
    action="access_user_data",
    context={"purpose": "analytics"},
)

if not evaluation.approved:
    raise PolicyViolation(evaluation.reason)
```

- Runtime Verification: Complete test report
- Architecture Overview: System design
- LLM Integration: Provider architecture
- Phase Reports: Development phase summaries
FractalAI was developed using the BMAD development framework. Contributions welcome!
```bash
# Install dev dependencies
pip install -r requirements.txt

# Run full test suite
pytest tests/

# Run runtime verification
python3 test_runtime_integration.py

# Check code coverage
pytest --cov=fractal_agent tests/
```

[Add your license here]
- Stafford Beer: Creator of the Viable System Model
- BMAD Framework: Development tool used to build FractalAI
- Anthropic: Claude LLM API
- DSPy: Self-improving prompting framework
[Add your contact information]
Status: ✅ Production-Ready | Test Pass Rate: 100% (5/5 runtime tests + all phase tests) | Last Verified: 2025-10-23