Real-world performance benchmarks for the ZAP Protocol, compared against Protobuf, JSON, and traditional architectures.
Core serialization: encoding/decoding performance comparisons across ZAP, Protobuf, and JSON.
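As a minimal sketch of how an encode/decode comparison can be timed, the snippet below uses the stdlib `json` codec as a stand-in for the formats under test; the actual ZAP and Protobuf codecs are not shown here, and `bench` is a hypothetical helper, not part of this repository's API:

```python
import json
import statistics
import time

def bench(encode, decode, payload, iters=1000):
    """Time encode and decode separately; return median seconds per operation."""
    enc_times, dec_times = [], []
    for _ in range(iters):
        t0 = time.perf_counter()
        blob = encode(payload)
        t1 = time.perf_counter()
        decode(blob)
        t2 = time.perf_counter()
        enc_times.append(t1 - t0)
        dec_times.append(t2 - t1)
    return statistics.median(enc_times), statistics.median(dec_times)

# The same data structure is used for every format under test.
payload = {"id": 42, "method": "tools/call",
           "params": {"name": "search", "args": ["zap"] * 16}}

enc_s, dec_s = bench(json.dumps, json.loads, payload)
print(f"json encode: {enc_s * 1e6:.1f} us/op, decode: {dec_s * 1e6:.1f} us/op")
```

Medians are reported rather than means so that occasional scheduler hiccups do not skew the result.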
MCP server memory overhead and multi-agent orchestration:
- Claude Code with 100 individual MCP servers
- Hanzo Dev with 1 ZAP router (proxying to 100 MCP servers)
- 20 parallel sub-agent task execution
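The parallel sub-agent case can be sketched as a fan-out of concurrent tasks whose wall time is measured as a whole. The `run_subagent` body below is a placeholder (a `sleep` standing in for an I/O-bound MCP tool call), not the harness used by `make bench-agents`:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task_id: int) -> str:
    """Stand-in for one sub-agent task; a real run would invoke an MCP tool call."""
    time.sleep(0.01)  # simulate an I/O-bound tool round-trip
    return f"task-{task_id}: done"

def run_parallel(n_tasks: int = 20) -> tuple[list[str], float]:
    """Execute n_tasks sub-agent tasks concurrently and measure total wall time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_tasks) as pool:
        results = list(pool.map(run_subagent, range(n_tasks)))
    return results, time.perf_counter() - start

results, elapsed = run_parallel()
print(f"{len(results)} tasks completed in {elapsed * 1000:.1f} ms")
```

Because the tasks are I/O-bound, 20 of them in parallel should complete in roughly the time of one, which is the effect the orchestration benchmarks quantify.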
Warp messaging and consensus operations:
- Cross-chain message encoding
- Validator set updates
- State proof verification
- Consensus round-trip times
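A cross-chain message round-trip can be illustrated with a simple length-prefixed framing. The field layout below (network ID, 32-byte source chain ID, payload) is hypothetical and chosen for illustration; it is not Warp's actual wire format:

```python
import struct

# Hypothetical layout for illustration only:
# networkID (u32, big-endian), sourceChainID (32 bytes), u32 payload length.
HEADER = struct.Struct(">I32sI")

def encode_message(network_id: int, source_chain_id: bytes, payload: bytes) -> bytes:
    """Pack the fixed header, then append the raw payload."""
    return HEADER.pack(network_id, source_chain_id, len(payload)) + payload

def decode_message(blob: bytes) -> tuple[int, bytes, bytes]:
    """Unpack the header and slice out the length-prefixed payload."""
    network_id, source_chain_id, n = HEADER.unpack_from(blob)
    payload = blob[HEADER.size:HEADER.size + n]
    return network_id, source_chain_id, payload

msg = encode_message(1, b"\x01" * 32, b"transfer:100")
assert decode_message(msg) == (1, b"\x01" * 32, b"transfer:100")
```

The round-trip assertion at the end is the shape of check the message-encoding benchmarks repeat under timing.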
Distributed AI inference:
- KV cache shard transfers
- Model weight distribution
- Batch prompt encoding
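Batch prompt encoding can be sketched as a count-prefixed sequence of length-prefixed UTF-8 strings. This framing is an assumption made for the example, not the encoding ZAP actually uses:

```python
import struct

def encode_batch(prompts: list[str]) -> bytes:
    """Frame a batch as: u32 prompt count, then (u32 byte length, UTF-8 bytes) per prompt."""
    parts = [struct.pack(">I", len(prompts))]
    for p in prompts:
        data = p.encode("utf-8")
        parts.append(struct.pack(">I", len(data)))
        parts.append(data)
    return b"".join(parts)

def decode_batch(blob: bytes) -> list[str]:
    """Walk the frames back out, advancing an offset past each length prefix."""
    count = struct.unpack_from(">I", blob)[0]
    offset, prompts = 4, []
    for _ in range(count):
        n = struct.unpack_from(">I", blob, offset)[0]
        offset += 4
        prompts.append(blob[offset:offset + n].decode("utf-8"))
        offset += n
    return prompts

batch = ["summarize the doc", "translate to Go", "explain KV cache sharding"]
assert decode_batch(encode_batch(batch)) == batch
```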
```bash
# Install dependencies
make setup

# Run all benchmarks
make bench

# Run a specific benchmark suite
make bench-agents
make bench-blockchain
make bench-serialize
make bench-inference

# Generate reports
make report
```

Prerequisites:
- Go 1.21+
- Python 3.11+
- Node.js 20+ (for MCP benchmarks)
- Docker (optional, for isolated tests)
Results are written to the results/ directory in JSON format and can be visualized with:

```bash
make charts
```

All benchmarks follow these principles:
- Reproducibility: Fixed seeds, controlled environments
- Statistical rigor: Multiple iterations, percentile reporting
- Fair comparison: Same data structures across formats
- Real workloads: Based on actual Hanzo/Lux production patterns
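The fixed-seed and percentile-reporting principles can be sketched together: seed the generator, collect samples, and report p50/p90/p99 rather than a single mean. The synthetic Gaussian latencies below are placeholders for real measurements:

```python
import random
import statistics

def percentile_report(samples, ps=(50, 90, 99)):
    """Cut the samples into 100 quantiles so index p-1 is the p-th percentile."""
    qs = statistics.quantiles(samples, n=100)
    return {f"p{p}": qs[p - 1] for p in ps}

random.seed(1337)  # fixed seed, so every run reports identical numbers
samples = [random.gauss(10.0, 2.0) for _ in range(10_000)]
report = percentile_report(samples)
print({k: round(v, 2) for k, v in report.items()})
```

Percentiles surface tail latency that a mean would hide, which matters for consensus round-trips and agent orchestration alike.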
See METHODOLOGY.md for details.
Licensed under Apache 2.0.