Scalable, event-sourced Agent execution architecture on:

- Kubernetes (sticky Pods per `thread_id`)
- Redis Streams (hot event log)
- Postgres (warm event store)
- SeaweedFS via S3 Gateway (cold archives)
This repository is intentionally structured as a Go monorepo with multiple small services under cmd/.
- Start dependencies (this repo's `docker-compose.yml` maps Redis to port 6380 and Postgres to 5433 to avoid conflicts):

```sh
docker compose up -d
```

- Build and run all services:

```sh
chmod +x scripts/run-local-m1.sh
./scripts/run-local-m1.sh
```

- Alternatively, run each service manually:
Gateway:

```sh
make build
HTTP_ADDR=127.0.0.1:18081 bin/event-gateway
```

Beacon (REST + SSE):

```sh
make build
HTTP_ADDR=127.0.0.1:18082 bin/beacon
```

Reference agent:

```sh
make build
EVENT_GATEWAY_URL=http://127.0.0.1:18081 bin/reference-agent
```

Create a thread + turn:
```sh
curl -sS -X POST http://127.0.0.1:18080/turns \
  -H 'content-type: application/json' \
  -d '{"thread_id":"01J00000000000000000000000","turn_id":"01J00000000000000000000001","input":{"text":"hello"}}'
```

Watch SSE:

```sh
curl -N "http://127.0.0.1:18082/threads/01J00000000000000000000000/events/stream"
```

This starts Postgres persistence (warm store) and the SeaweedFS S3 gateway (cold store), plus the read API.
- Start dependencies (Redis, Postgres, SeaweedFS):

```sh
docker compose up -d
```

- Run persister + beacon:

```sh
chmod +x scripts/run-local-m25.sh
./scripts/run-local-m25.sh
```

- Generate some events (run milestone 1 in another terminal):

```sh
chmod +x scripts/run-local-m1.sh
./scripts/run-local-m1.sh
```

- Archive a seq range to S3 (JSONL.gz):
```sh
ARCHIVE_THREAD_ID=01J00000000000000000000000 \
ARCHIVE_FROM_SEQ=1 ARCHIVE_TO_SEQ=200 \
bin/archiver
```

- List archives (manifest in Postgres):

```sh
curl -sS "http://127.0.0.1:18084/threads/01J00000000000000000000000/archives" | jq
```

- Fetch an archive object (streamed from SeaweedFS S3):

```sh
curl -sS "http://127.0.0.1:18084/threads/01J00000000000000000000000/archives/<archive_id>" > archive.jsonl.gz
```