# ai-support-agent

An opinionated scaffold for an AI-driven customer support agent.
## Overview

- Purpose: provide a modular, production-minded Python/FastAPI project layout for building conversational support agents that orchestrate LLMs, knowledge retrieval (RAG), and external tools (Shopify, helpdesk, notifications).
- Goals: clear separation of concerns, testable agents, pluggable LLM providers, and observability.
## Quickstart

- Copy the environment variables from `.env.example` to `.env` and set secrets (e.g., `OPENAI_API_KEY`).
- Install dependencies:

  ```shell
  python -m pip install -r requirements.txt
  ```

- Run locally:

  ```shell
  python main.py
  # or with uvicorn
  uvicorn main:app --reload --port 8000
  ```

- Docker (build & run):

  ```shell
  docker build -t ai-support-agent .
  docker run -p 8000:80 --env-file .env ai-support-agent
  ```

## Project Layout (top-level)

```text
ai-support-agent/
├── .env.example         # example environment variables
├── .gitignore
├── README.md            # this file
├── requirements.txt     # pip dependencies for the app
├── Dockerfile
├── docker-compose.yml
├── pyproject.toml
└── main.py              # app entrypoint
```
## High-level packages

- `app/` — HTTP layer, routers, middleware, pydantic schemas, and dependency wiring. `app/main.py` exposes `create_app()`, which builds the FastAPI app and registers routes.
- `domain/` — Pure business rules and domain models (orders, tickets, users, policies).
- `orchestration/` — Intent routing, execution planning, ambiguity resolution, and confidence scoring.
- `agents/` — Agent implementations grouped by capability (intent, knowledge, orders, tickets, escalation). Each agent contains `agent.py`, `prompts.py`, and `schemas.py`.
- `execution/` — Dispatcher, validators, retries, circuit breaker, external tool wrappers (Shopify, helpdesk, notifications), and workflow handlers.
- `knowledge/` — Document ingestion, chunking, embeddings, retrieval, vector store adapters, and freshness rules.
- `llm/` — LLM routing, fallbacks, provider integrations, prompts, and guardrails (JSON validation, content filtering, retry/degradation strategies).
- `memory/` — Session/context memory manager, store, summarizer, and validators.
- `observability/` — Logger, tracer, metrics, cost tracker, and alerting helpers.
- `events/` — Optional event publisher/consumer layer and event schemas.
- `config/` — YAML configs for LLMs, RAG, tools, and rollout flags.
- `scripts/` — Helpful CLI scripts (ingest docs, rebuild index, backfill embeddings, chaos tests).
- `tests/` — Unit, integration, e2e, and chaos test suites.
- `docs/` — Architecture, API, deployment, runbooks, and failure scenarios.
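The "structured inputs, structured outputs" shape of an agent under `agents/` could look like the sketch below. The `IntentAgent` name, its fields, and the keyword rules are illustrative assumptions, not the scaffold's actual API:

```python
# Sketch of a single-responsibility agent (agents/intent/agent.py style).
# All names and the keyword heuristic are hypothetical examples.
from dataclasses import dataclass


@dataclass(frozen=True)
class IntentInput:
    message: str


@dataclass(frozen=True)
class IntentOutput:
    intent: str
    confidence: float


class IntentAgent:
    """Classifies a user message into a coarse support intent."""

    KEYWORDS = {
        "refund": "order_refund",
        "track": "order_status",
        "password": "account_access",
    }

    def run(self, payload: IntentInput) -> IntentOutput:
        text = payload.message.lower()
        for keyword, intent in self.KEYWORDS.items():
            if keyword in text:
                return IntentOutput(intent=intent, confidence=0.9)
        # No keyword matched: low-confidence fallback for the orchestrator
        return IntentOutput(intent="unknown", confidence=0.3)
```

Because inputs and outputs are plain typed structures, the orchestration layer can score confidence and route without knowing anything about the agent's internals.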
## Design notes & conventions

- Keep `domain/` pure: no IO, only deterministic business logic and policies.
- Agents are small, single-responsibility units: they take structured inputs and return structured outputs.
- The execution layer handles retries, circuit-breaking, and idempotency.
- The knowledge layer enforces freshness rules before serving RAG results.
- LLM calls are routed through `llm/router.py` to allow provider failover and guardrails.
- Use pydantic models (`app/schemas`) for all external and internal interfaces.
- Add observability hooks in entrypoints (FastAPI middleware, dispatcher, and agent runners).
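Provider failover in `llm/router.py` can be as simple as trying providers in priority order. A minimal sketch — the `route_completion` name, the `(name, complete)` pairs, and the exception type are assumptions, not the scaffold's real interface:

```python
# Hedged sketch of provider failover, llm/router.py style.
# Provider callables and names here are hypothetical.
from typing import Callable, Sequence


class AllProvidersFailed(Exception):
    """Raised when every configured provider errored out."""


def route_completion(
    prompt: str,
    providers: Sequence[tuple[str, Callable[[str], str]]],
) -> str:
    """Try each provider in priority order; fall through on any error."""
    errors: list[str] = []
    for name, complete in providers:
        try:
            return complete(prompt)
        except Exception as exc:  # real code should catch provider-specific errors
            errors.append(f"{name}: {exc}")
    raise AllProvidersFailed("; ".join(errors))
```

A real router would add per-provider timeouts, guardrail validation of the response, and cost/latency metrics via the `observability/` helpers, but the control flow stays this shape.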
## Next steps / TODO

- Implement concrete LLM provider connectors in `llm/providers/`.
- Wire agents into `orchestration/router.py` and build `execution/dispatcher.py` flows.
- Add test coverage under `tests/unit` for domain logic and `tests/integration` for end-to-end flows.
- Configure CI to run linting and tests.
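Because `domain/` is IO-free, unit tests under `tests/unit` need no mocks. A starting-point sketch — the refund-window policy and all names below are hypothetical examples, not the scaffold's actual rules:

```python
# tests/unit sketch: pure domain logic tests, no mocks or IO required.
# The Order model and refund policy are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass(frozen=True)
class Order:
    placed_on: date
    total_cents: int


def refund_allowed(order: Order, today: date, window_days: int = 30) -> bool:
    """Pure policy check: refunds allowed within the return window."""
    return today - order.placed_on <= timedelta(days=window_days)


def test_refund_inside_window():
    order = Order(placed_on=date(2024, 1, 1), total_cents=5000)
    assert refund_allowed(order, today=date(2024, 1, 15))


def test_refund_outside_window():
    order = Order(placed_on=date(2024, 1, 1), total_cents=5000)
    assert not refund_allowed(order, today=date(2024, 3, 1))
```

Run with `pytest tests/unit`; integration tests under `tests/integration` can then exercise the HTTP layer and tool wrappers end to end.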