A minimal full-stack chatbot starter template with a FastAPI backend (powered by Cloudflare Workers AI) and a React + Vite frontend.
The backend exposes a simple /chat API, and the frontend provides a clean chat interface for interacting with the bot. Lightweight, extendable, and perfect for swapping in other LLMs, RAG systems, or NLP pipelines.
Live Demo: https://basic-chatbot-xi.vercel.app
Basic-Chatbot/
│
├── chatbot-backend/
│ ├── main.py # FastAPI app defining /chat endpoint
│ ├── requirements.txt # Python dependencies
│ └── __pycache__/
│
├── chatbot-frontend/
│ ├── public/ # Static assets
│ ├── src/ # React components, CSS, image assets
│ │ ├── App.jsx # Main chat UI component
│ │ ├── App.css
│ │ ├── index.css
│ │ └── main.jsx # React entry point
│ ├── index.html # HTML shell
│ ├── package.json
│ └── package-lock.json
│
├── README.md # Documentation
└── .gitignore
cd chatbot-backend
pip install -r requirements.txt
Create a .env file inside chatbot-backend/:
CLOUDFLARE_API_KEY=your_cloudflare_api_token_here
CLOUDFLARE_ACCOUNT_ID=your_cloudflare_account_id_here
Get a free API token (with the Workers AI permission) and find your Account ID in the
Cloudflare Dashboard. The default model is
@cf/meta/llama-3.1-8b-instruct.
uvicorn main:app --reload
Backend will run on:
http://127.0.0.1:8000
curl -X POST "http://127.0.0.1:8000/chat" \
-H "Content-Type: application/json" \
-d '{"user_message": "hello"}'
cd chatbot-frontend
npm install
npm run dev
The dev server runs on http://localhost:5173.
The frontend reads VITE_API_BASE_URL from the environment and falls back to
http://localhost:8000. To point at a different backend, create
chatbot-frontend/.env:
VITE_API_BASE_URL=http://127.0.0.1:8000
The base URL is used in src/App.jsx:
const API_BASE_URL =
import.meta.env.VITE_API_BASE_URL || "http://localhost:8000";
- Extremely lightweight
- Easy to extend with ML/LLMs
- Async support
- Fast and deploy-ready
- Component-based UI that's easy to extend
- Fast dev server with HMR
- Tiny build output, deploy-ready (Vercel, Netlify, etc.)
- Easy to swap in additional UI libraries later
- Backend and frontend decoupled
- Seamless future migration (React/Vue frontend, larger backend)
The goal was to create a clean, minimal chatbot architecture that:
✔ Separates backend logic from UI
✔ Provides a predictable API (/chat)
✔ Allows immediate replacement of the bot response with:
- OpenAI / Anthropic / Gemini (swap the SDK in main.py)
- LangChain RAG pipeline
- Local ML models

✔ Supports easy UI redesign or integration into a bigger app
✔ Deployable with minimal configuration
sequenceDiagram
User->>Frontend: Enter message in chat box
Frontend->>Backend: POST /chat { user_message }
Backend->>Cloudflare: chat.completions.create(user_message)
Cloudflare->>Backend: Model response
Backend->>Frontend: JSON { response }
Frontend->>User: Render bot message in UI
Kept intentionally small so it's easy to read and extend.
Each request is independent — no server-side conversation history. The frontend
persists the chat log in localStorage, but the model itself sees only the
current message.
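If multi-turn context is ever needed, one option is for the frontend to send its stored log along with each request and have the backend forward it as prior messages. A sketch (the `history` field is hypothetical, not part of the current API):

```python
from pydantic import BaseModel


class ChatRequest(BaseModel):
    user_message: str
    # Hypothetical extension: prior turns from localStorage, e.g.
    # [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]
    history: list[dict] = []


def build_messages(req: ChatRequest) -> list[dict]:
    # Forward the whole conversation so the model sees past turns.
    return [*req.history, {"role": "user", "content": req.user_message}]
```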
Chat history lives only in the browser; this is intentional, to keep things simple.
CORS is necessary because the frontend and backend are hosted separately. Allowed
origins are configured in chatbot-backend/main.py.