Sub-millisecond, Kafka-compatible message broker written in C++20.
Zero dependencies. Single binary. Drop-in Kafka replacement for development and testing.
Kafka is powerful but heavy — JVM, ZooKeeper/KRaft, gigabytes of RAM, slow startup. For local development and testing, you just need something that speaks the Kafka protocol and gets out of the way.
Develop locally against StrikeMQ, deploy to real Kafka in production. Think of it like LocalStack for AWS or SQLite for PostgreSQL — a lightweight stand-in that keeps your dev loop fast while your production stack stays unchanged.
StrikeMQ is a 52KB native binary that starts in milliseconds, uses 0% CPU when idle, and works with any Kafka client library out of the box.
- Kafka wire protocol — Works with librdkafka, kafka-python, confluent-kafka-go, kcat, and any Kafka client
- Built-in REST API — Inspect topics, peek at messages, produce, and manage consumer groups with just `curl`
- Sub-millisecond produce latency — Lock-free data structures, memory-mapped storage, zero-copy where possible
- Zero dependencies — Pure C++20, no JVM, no ZooKeeper, no third-party libraries
- Multi-threaded I/O — Acceptor + N worker threads, each with its own kqueue (macOS) / epoll (Linux), 0% CPU when idle
- Consumer groups — JoinGroup, SyncGroup, Heartbeat, OffsetCommit/Fetch — full rebalance protocol
- Auto-topic creation — Topics created on first produce or metadata request
- Cross-platform — macOS (Apple Silicon + Intel) and Linux (x86-64)
```bash
docker run -p 9092:9092 -p 8080:8080 awneesh/strikemq
```

That's it — any Kafka client can connect to 127.0.0.1:9092, and the REST API is available at 127.0.0.1:8080.
Or install via Homebrew:

```bash
brew tap awneesht/strike-mq
brew install strikemq
strikemq
```

To build from source:

```bash
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
cmake --build .
```

Running `./build/strikemq` prints a startup banner:

```
═══════════════════════════════════════════
StrikeMQ v0.1.4 — Sub-Millisecond Broker
═══════════════════════════════════════════
Platform: macOS (kqueue)
Kafka Port: 9092
HTTP API: 8080
IO Threads: 10 (auto)
═══════════════════════════════════════════
HTTP API listening on 0.0.0.0:8080
Listening on 0.0.0.0:9092
Broker ready. Press Ctrl+C to stop.
```
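Since startup takes under 10ms, scripts and CI jobs can launch the broker and poll for readiness right away. A minimal sketch in Python, assuming the default ports from the banner above (`wait_for_broker` is a local helper, not part of StrikeMQ):

```python
import json
import time
import urllib.request

def wait_for_broker(url="http://127.0.0.1:8080/v1/broker", timeout=5.0):
    """Poll the REST API until the broker answers or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=1) as resp:
                return json.load(resp)  # broker info: version, uptime, ports
        except OSError:
            time.sleep(0.05)  # not listening yet; retry shortly
    raise TimeoutError(f"StrikeMQ not ready within {timeout}s")

print(wait_for_broker()["version"])
```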
Try it with kcat:

```bash
# List broker metadata
kcat -b 127.0.0.1:9092 -L
# Produce messages
echo -e "hello\nworld\nstrike" | kcat -b 127.0.0.1:9092 -P -t my-topic
# Consume messages
kcat -b 127.0.0.1:9092 -C -t my-topic -e
# Consume with consumer group
kcat -b 127.0.0.1:9092 -G my-group my-topic
```

With kafka-python:

```python
from kafka import KafkaProducer, KafkaConsumer
# Produce
producer = KafkaProducer(bootstrap_servers='127.0.0.1:9092')
producer.send('my-topic', b'hello from python')
producer.flush()
# Consume (simple)
consumer = KafkaConsumer('my-topic', bootstrap_servers='127.0.0.1:9092',
                         auto_offset_reset='earliest')
for msg in consumer:
    print(msg.value.decode())
    break
# Consume with consumer group
consumer = KafkaConsumer('my-topic', group_id='my-group',
                         bootstrap_servers='127.0.0.1:9092',
                         auto_offset_reset='earliest')
for msg in consumer:
    print(msg.value.decode())
    break
```

StrikeMQ includes a built-in REST API on port 8080 — no Kafka tooling required. Inspect broker state, peek at messages, produce, and manage topics with just `curl`.
| Method | Path | Description |
|---|---|---|
| GET | `/v1/broker` | Broker info (version, uptime, config) |
| GET | `/v1/topics` | List all topics with partition count and offsets |
| GET | `/v1/topics/{name}` | Topic detail with per-partition offsets |
| DELETE | `/v1/topics/{name}` | Delete a topic and its data |
| GET | `/v1/topics/{name}/messages?offset=0&limit=10` | Peek at messages as JSON |
| POST | `/v1/topics/{name}/messages` | Produce messages via JSON body |
| GET | `/v1/groups` | List consumer groups |
| GET | `/v1/groups/{id}` | Group detail with members and committed offsets |
```bash
# Broker info
curl localhost:8080/v1/broker
# {"version":"0.1.4","broker_id":0,"uptime_seconds":42,"port":9092,"http_port":8080,...}
# List topics
curl localhost:8080/v1/topics
# [{"name":"my-topic","partitions":1,"start_offset":0,"end_offset":5}]
# Topic detail
curl localhost:8080/v1/topics/my-topic
# {"name":"my-topic","partitions":[{"partition":0,"start_offset":0,"end_offset":5}]}
# Peek at messages
curl "localhost:8080/v1/topics/my-topic/messages?offset=0&limit=3"
# [{"offset":0,"timestamp":1234567890,"key":null,"value":"hello"},...]
# Produce messages
curl -X POST localhost:8080/v1/topics/my-topic/messages \
-d '{"messages":[{"value":"hello"},{"key":"k1","value":"world"}]}'
# {"topic":"my-topic","partition":0,"base_offset":5,"record_count":2}
# Delete a topic
curl -X DELETE localhost:8080/v1/topics/my-topic
# {"deleted":"my-topic","partitions_removed":1}
# List consumer groups
curl localhost:8080/v1/groups
# [{"group_id":"my-group","state":"Stable","generation_id":1,"member_count":1}]
# Consumer group detail
curl localhost:8080/v1/groups/my-group
# {"group_id":"my-group","state":"Stable","generation_id":1,"protocol_type":"consumer",...}Messages produced via REST are fully compatible with Kafka consumers, and vice versa.
Performance at a glance:

| Metric | Value |
|---|---|
| Produce latency (p99.9) | < 1ms |
| CPU when idle | 0% |
| Memory footprint | ~1MB + mmap'd segments |
| Startup time | < 10ms |
| Binary size | 52KB |
Run benchmarks: `./build/strikemq_bench`
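For a rough cross-check from the client side, here is a hedged sketch with kafka-python. It times a full client round trip (serialization, network, broker ack), so its numbers will read higher than the broker-side latencies in the table:

```python
import time
from kafka import KafkaProducer

# Blocking .get() waits for the broker's ack, so each sample is a full
# produce round trip as seen by the client, not broker-internal latency.
producer = KafkaProducer(bootstrap_servers="127.0.0.1:9092")
samples = []
for _ in range(1000):
    start = time.perf_counter()
    producer.send("bench-topic", b"x" * 100).get(timeout=5)
    samples.append((time.perf_counter() - start) * 1000.0)
samples.sort()
print(f"p50={samples[500]:.3f} ms  p99={samples[990]:.3f} ms")
```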
Supported Kafka APIs:

| API | Versions | Status |
|---|---|---|
| ApiVersions | 0–3 | Full (including flexible versions) |
| Metadata | 0 | Full (auto-topic creation) |
| Produce | 0–5 | Full (persists to disk) |
| Fetch | 0–4 | Full (zero-copy from mmap'd segments) |
| ListOffsets | 0–2 | Full (earliest/latest offset resolution) |
| FindCoordinator | 0–2 | Full (returns self as coordinator) |
| JoinGroup | 0–3 | Full (rebalance protocol, member assignment) |
| SyncGroup | 0–2 | Full (leader distributes partition assignments) |
| Heartbeat | 0–2 | Full (session management, rebalance signaling) |
| LeaveGroup | 0–1 | Full (clean consumer shutdown) |
| OffsetCommit | 0–3 | Full (in-memory offset storage) |
| OffsetFetch | 0–3 | Full (retrieve committed offsets) |
| CreateTopics | 0–3 | Advertised |
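Since ApiVersions is fully supported, a client can ask the broker directly what it advertises. The sketch below hand-rolls an ApiVersions v0 request following the public Kafka wire format; it is an illustration of the protocol, not an official client:

```python
import socket
import struct

def read_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("broker closed the connection")
        buf += chunk
    return buf

def api_versions(host="127.0.0.1", port=9092):
    client_id = b"probe"
    # Request header: api_key=18 (ApiVersions), api_version=0,
    # correlation_id, then client_id as a length-prefixed string.
    payload = struct.pack(">hhih", 18, 0, 1, len(client_id)) + client_id
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack(">i", len(payload)) + payload)  # size prefix
        size = struct.unpack(">i", read_exact(sock, 4))[0]
        resp = read_exact(sock, size)
    # Response v0: correlation_id (int32), error_code (int16), then an
    # array of (api_key, min_version, max_version) int16 triples.
    _, error_code, count = struct.unpack(">ihi", resp[:10])
    entries = [struct.unpack(">hhh", resp[10 + i * 6:16 + i * 6])
               for i in range(count)]
    return error_code, entries

err, apis = api_versions()
for key, lo, hi in apis:
    print(f"api_key={key}: versions {lo}-{hi}")
```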
Project layout:

```
├── include/
│   ├── core/types.h              # Broker config, topic types
│   ├── http/http_server.h        # REST API server, JSON writer
│   ├── network/tcp_server.h      # Multi-threaded TCP server (acceptor + workers)
│   ├── protocol/kafka_codec.h    # Kafka encoder/decoder/router
│   ├── storage/partition_log.h   # mmap'd log segments + index
│   ├── storage/consumer_group.h  # Consumer group state management
│   └── utils/                    # Ring buffers, memory pool, endian
├── src/
│   ├── main.cpp                  # Broker orchestration
│   ├── http/http_server.cpp      # REST API handlers + HTTP parser
│   ├── network/tcp_server.cpp    # kqueue/epoll implementation
│   └── protocol/kafka_codec.cpp  # Wire protocol codec
├── tests/                        # Unit tests
├── benchmarks/                   # Latency microbenchmarks
└── docs/                         # Architecture, changelog, logo
```
Run the test suite:

```bash
cd build && ctest
```

Or run them individually:

```bash
./strikemq_test_ring # Lock-free ring buffer
./strikemq_test_pool # Memory pool allocator
./strikemq_test_codec # Kafka protocol codec
```

- Architecture — System design, layer details, platform support
- Product Guide — Full feature reference and configuration
- Changelog — Version history and bug fixes
StrikeMQ is designed for local development and testing, not production. It trades durability and fault-tolerance for simplicity and speed.
- Consumer group offsets are in-memory only (lost on broker restart)
- No replication (single broker)
- No SASL/SSL authentication
- No log compaction or retention enforcement
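In practice this means each broker instance is best treated as disposable: start fresh, run the tests, throw it away. A pytest-style sketch of a session fixture built on the Docker image from the quickstart (the fixture itself is illustrative, not shipped with StrikeMQ):

```python
import subprocess
import time
import urllib.request

import pytest

@pytest.fixture(scope="session")
def strikemq():
    # Fresh broker per test session; since offsets and group state are
    # in-memory, nothing leaks between runs.
    proc = subprocess.Popen(["docker", "run", "--rm", "-p", "9092:9092",
                             "-p", "8080:8080", "awneesh/strikemq"])
    deadline = time.monotonic() + 10
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen("http://127.0.0.1:8080/v1/broker", timeout=1)
            break  # REST API answered: broker is ready
        except OSError:
            time.sleep(0.1)
    else:
        proc.terminate()
        raise RuntimeError("StrikeMQ did not become ready")
    yield "127.0.0.1:9092"  # bootstrap address for Kafka clients
    proc.terminate()
    proc.wait()
```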
MIT
