Agent Platform is a monorepo for building several AI-assisted product mini-apps on one shared foundation instead of creating a separate backend and worker stack for each idea.
The intent is to keep the platform boring at the infrastructure level and modular at the product level:
- one React frontend shell with separate routes per mini-app
- one FastAPI backend that owns APIs, orchestration, persistence, and integrations
- one Dramatiq worker for long-running jobs
- PostgreSQL as the durable system of record
- Redis only for queue, cache, and transient state
- Dynaconf for configuration and Structlog for logging
- CrewAI kept as an internal subsystem, never the public API surface
This repository turns a set of AI-enabled utilities into a coherent platform with strict boundaries.
The core idea is:
- mini-apps should feel independent at the product level
- they should still reuse the same platform primitives, job lifecycle, storage model, and deployment model
- deterministic logic should stay in services and repositories
- agents should be used only where interpretation or summarization is actually needed
- prompts should live in versioned files, not in Python code
The platform is being extended with the same shape for Telegram Deals, Flights, and Trains rather than rewritten again for each new app.
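The "prompts in versioned files" rule can be sketched as a small loader that reads runtime prompts from disk instead of embedding them in Python. This is an illustrative sketch only: the `load_prompt` helper and the `<mini_app>/<name>.md` layout under `shared/prompts/` are assumptions, not this repository's actual API.

```python
from pathlib import Path

# Hypothetical helper illustrating the "prompts live in versioned files" rule.
# Assumes a layout like shared/prompts/<mini_app>/<name>.md; the real file
# naming in this repository may differ.
PROMPTS_ROOT = Path("shared/prompts")

def load_prompt(mini_app: str, name: str, root: Path = PROMPTS_ROOT) -> str:
    """Read a runtime prompt from its versioned file instead of hardcoding it."""
    path = root / mini_app / f"{name}.md"
    return path.read_text(encoding="utf-8")
```

Keeping prompts on disk means prompt changes show up in code review and version history like any other change.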
The platform enforces a few fixed boundaries:
- Frontend talks only to FastAPI.
- Backend owns business logic, orchestration, persistence, and external integrations.
- Long-running work goes through the jobs system and the worker.
- Agents are internal helpers, not raw API concepts.
- PostgreSQL stores durable state.
- Redis stays limited to broker, cache, and transient execution state.
- Runtime prompts in shared/prompts/ stay separate from development prompts in docs/prompts/.
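The split between durable and transient state above implies a fixed job lifecycle: job rows live in PostgreSQL, while Redis only carries the queue message. A minimal sketch of that lifecycle, with hypothetical names (the repository's real job model and statuses may differ):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

# Sketch of the job lifecycle implied by the boundaries above. All names
# here are hypothetical stand-ins for the real jobs system.
class JobStatus(str, Enum):
    QUEUED = "queued"
    RUNNING = "running"
    SUCCEEDED = "succeeded"
    FAILED = "failed"

# Legal transitions; anything else is a programming error.
_ALLOWED = {
    JobStatus.QUEUED: {JobStatus.RUNNING},
    JobStatus.RUNNING: {JobStatus.SUCCEEDED, JobStatus.FAILED},
}

@dataclass
class Job:
    """Durable job row; in the real stack this would be a PostgreSQL record."""
    job_id: str
    status: JobStatus = JobStatus.QUEUED
    updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def transition(self, new_status: JobStatus) -> None:
        if new_status not in _ALLOWED.get(self.status, set()):
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status
        self.updated_at = datetime.now(timezone.utc)
```

Because terminal states have no outgoing transitions, a finished job can never be silently re-run by a stale queue message.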
The current mini-apps are:
- telegram_deals: scans Telegram messages, extracts structured deal data, uses AI only for controlled relevance evaluation, and persists accepted results
- flights: accepts natural-language flight requests, resolves them into typed searches, ranks provider-backed offers, summarizes results, and stores them by job
- trains: follows the same request-driven pattern for rail searches behind the same backend and jobs boundaries
Repository layout:
- frontend/: React application and mini-app routes
- backend/src/app/: FastAPI app, jobs system, worker entrypoints, shared core modules, and mini-app backends
- backend/tests/: architecture and backend behavior tests
- shared/prompts/: runtime prompt files versioned by mini-app
- shared/crewai/: CrewAI runtime YAML configuration
- docs/: decisions, plans, and task tracking for the platform itself
The main flow is:
Frontend -> FastAPI -> service layer -> worker/jobs -> PostgreSQL
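The flow above can be sketched end to end with stdlib stand-ins: an in-memory dict in place of the PostgreSQL jobs table and a `queue.Queue` in place of the Redis-backed Dramatiq broker. All names here are hypothetical illustrations, not this repository's actual functions.

```python
import queue
import uuid

JOBS: dict[str, dict] = {}           # stands in for the PostgreSQL jobs table
BROKER: queue.Queue = queue.Queue()  # stands in for the Redis-backed queue

def submit_job(payload: dict) -> str:
    """What the API layer does: persist a job row, enqueue work, return the id."""
    job_id = uuid.uuid4().hex
    JOBS[job_id] = {"status": "queued", "result": None}
    BROKER.put((job_id, payload))
    return job_id

def run_worker_once() -> None:
    """What the worker does: pull one message, execute, persist the result."""
    job_id, payload = BROKER.get()
    JOBS[job_id]["status"] = "running"
    JOBS[job_id]["result"] = {"echo": payload}  # real service work happens here
    JOBS[job_id]["status"] = "succeeded"
```

The key property is that the API call returns immediately with a job id; the frontend never blocks on the long-running work itself.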
Current API groups:
- /api/health
- /api/jobs/*
- /api/telegram-deals/*
- /api/flights/*
- /api/trains/*
Current frontend routes:
- /telegram-deals
- /flights
- /trains
Create a local env file first:

    cp .env.example .env

Start the stack with:

    docker compose up --build

This starts:
- frontend
- backend
- worker
- PostgreSQL
- Redis
Main local endpoints:
- frontend: http://localhost:5173
- backend health: http://localhost:8000/api/health
- jobs API: http://localhost:8000/api/jobs/{job_id}
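A client that submits a job typically polls the jobs endpoint until the job finishes. A minimal polling loop might look like the sketch below; `fetch` is injected (e.g. a function wrapping an HTTP GET on /api/jobs/{job_id}) and the status names are assumed, since the actual job schema is not shown here.

```python
import time
from typing import Callable

# Statuses assumed to be terminal; the real jobs API may use different names.
TERMINAL = {"succeeded", "failed"}

def poll_job(fetch: Callable[[str], dict], job_id: str,
             interval: float = 1.0, max_attempts: int = 60) -> dict:
    """Poll until the job reaches a terminal status or attempts run out."""
    for attempt in range(max_attempts):
        job = fetch(job_id)
        if job.get("status") in TERMINAL:
            return job
        if attempt + 1 < max_attempts:
            time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish in {max_attempts} polls")
```

Injecting `fetch` keeps the loop independent of any HTTP library and easy to test against canned responses.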
Base stack startup does not require every external credential, but app flows that depend on them will fail until configured.
Common variables:
- OPENAI_API_KEY
- SERPAPI_KEY
- TELEGRAM_API_ID
- TELEGRAM_API_HASH
- TELEGRAM_CHANNEL
If you want Telegram scanning to work, complete the interactive login setup once:
    docker compose run --rm --profile tools telegram-login

The backend uses uv. If you need to run backend commands locally and avoid container-owned virtualenv issues, use:

    backend/scripts/uv-local.sh run pytest tests/test_health.py

To recreate or repair the local backend environment:

    backend/scripts/repair-local-venv.sh