
The AI Assistant Backend powers multi-agent reasoning, memory, and automation through a modular API built for real-time intelligence. It serves as the core engine behind Raven DevOps’ AI products and workflows.

chat-assistant-backend

Status: Production FastAPI + Postgres backend deployed to Cloud Run and fronted by API Gateway. MongoDB has been removed; the Node OpenAuxilium/ service is the only legacy/experimental backend.

Production endpoints (Cloud Run + API Gateway)

  • Public gateway (for frontend/Netlify): https://chat-assistant-backend-gw-3j4dip0k.uc.gateway.dev
  • Backend service (private; use identity token if calling directly): https://chat-assistant-backend-276715928612.us-central1.run.app
  • Frontend env var: set VITE_CHAT_API_BASE (or the legacy alias VITE_ASSISTANT_API_URL) to the gateway URL in the website project.

Tech Stack

  • Language: Python
  • Framework: FastAPI + Uvicorn
  • Database: Postgres (Cloud SQL)
  • AI Provider: OpenAI Chat Completions (configurable) with optional local LLM fallback via LOCAL_LLM_BASE_URL + LOCAL_LLM_MODEL
  • Deployment: Cloud Run container with API Gateway in front; Heroku container workflow is kept for parity
  • Services: web (API) + worker (background jobs)
  • Telemetry: Lightweight logging gated by TELEMETRY_ENABLED (default true). Responses are trimmed to 2-3 sentences for consistency.

Local Development

  1. Copy .env.example to backend/.env and set:
    • PORT=4000
    • DATABASE_URL=postgresql://app_user:password@127.0.0.1:5432/ravdevops
    • OPENAI_API_KEY=... (omit or set to disabled to exercise offline/local modes)
    • OPENAI_TIMEOUT_SECONDS and OPENAI_MAX_RETRIES as needed
    • ADMIN_API_TOKEN=... (required for /admin/* and write operations on /api/{collection})
    • CORS_ALLOW_ORIGINS=http://localhost:5173 plus any preview domains
  2. Install dependencies and run the API:
    cd backend
    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
    pip install -r requirements.txt
    uvicorn backend.main:app --reload --host 0.0.0.0 --port 4000
  3. Point the frontend to your local instance by setting VITE_CHAT_API_BASE=http://localhost:4000 in the website .env.
  4. Run the test suite (optional but recommended):
    pytest
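Once the server is up, you can smoke-test an authorized write to /api/{collection} with a short standard-library script. The Bearer `Authorization` scheme and the `notes` collection are assumptions for illustration; check the backend source for the exact header ADMIN_API_TOKEN must be sent in:

```python
import json
import os
import urllib.request

BASE = os.environ.get("VITE_CHAT_API_BASE", "http://localhost:4000")

def build_write_request(collection: str, payload: dict, token: str) -> urllib.request.Request:
    """Build an authorized POST to /api/{collection}."""
    return urllib.request.Request(
        url=f"{BASE}/api/{collection}",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_write_request("notes", {"text": "hello"}, os.environ["ADMIN_API_TOKEN"])
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())
```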

CORS configuration

  • Default dev origins: http://localhost:3000, http://localhost:5173 (see backend/config.py).
  • Production setting is currently: CORS_ALLOW_ORIGINS="https://ravdevops.com,https://www.ravdevops.com".
  • If you add preview domains, update CORS_ALLOW_ORIGINS in Cloud Run (or equivalent) to include them.

Docker & Cloud Run / Heroku

  • Build and run locally with Docker:
    docker build -t chat-assistant-backend .
    docker run -p 4000:4000 --env-file backend/.env chat-assistant-backend
  • The same image is used for Cloud Run; Heroku container deployment is retained for parity:
    • heroku stack:set container -a chat-assistant-backend
    • heroku container:push web worker -a chat-assistant-backend
    • heroku container:release web worker -a chat-assistant-backend

Documentation

  • Roadmap: roadmap.md
  • Detailed docs (API, deployment, architecture): docs/
  • Project wiki & additional notes: wiki.md

CI / Automation

  • GitHub Actions runs the test suite on each push and pull request to main (see .github/workflows/tests.yml).
  • CodeQL code scanning is enabled for Python to detect security issues (see .github/workflows/codeql.yml).

Contributing / Pull Requests

  1. Create a feature branch on your clone.
  2. Ensure you have a local env file (see backend/.env, which is ignored by Git).
  3. Make your code changes and run the backend locally (see "Local Development").
  4. Commit your changes with a clear message.
  5. Open a Pull Request (PR) against the main branch, describing:
    • What you changed.
    • How to test it.
    • Any new environment variables or config required.

License

This project is not open source. See LICENSE for details. All rights reserved.