A teaching-oriented full-stack sample that pairs a Python Quart backend with a React + FluentUI frontend, real-time Server-Sent Events (SSE), and Playwright tests.
- Shows how to keep REST and MCP JSON-RPC in a single Quart process (sketched below)
- Demonstrates “Grokking Simplicity” (actions vs. calculations vs. data) and “A Philosophy of Software Design” (deep modules)
- Provides an approachable playground for FluentUI v9, Vite, and Playwright
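To make the single-process idea concrete, here is a minimal, hypothetical sketch of one Quart app answering both a REST route and an MCP-style JSON-RPC `tools/call` request from the same function. It is not the repo's actual wiring (see `backend/app.py` for that); the route bodies and the `list_tasks` helper are illustrative only.

```python
# Hypothetical sketch: REST and MCP JSON-RPC served by one Quart process,
# both dispatching to the same underlying operation.
from quart import Quart, jsonify, request

app = Quart(__name__)

def list_tasks():
    # Shared operation; in the real repo this lives in TaskService.
    return [{"id": 1, "title": "Demo task", "completed": False}]

@app.get("/api/tasks")
async def rest_list_tasks():
    return jsonify(list_tasks())

@app.post("/mcp")
async def mcp_endpoint():
    req = await request.get_json()
    if req.get("method") == "tools/call" and req["params"]["name"] == "list_tasks":
        result = {"content": [{"type": "text", "text": str(list_tasks())}]}
        return jsonify({"jsonrpc": "2.0", "id": req.get("id"), "result": result})
    return jsonify({"jsonrpc": "2.0", "id": req.get("id"),
                    "error": {"code": -32601, "message": "Method not found"}})
```

The point is that each operation needs only one implementation; REST and MCP are two thin transports over it.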
- Backend: Quart, Pydantic 2, MCP JSON-RPC, Async SSE (`backend/app.py`)
- Business logic: `TaskService` + models in `backend/tasks.py`
- LLM Integration: Ollama with local models (`backend/ollama_service.py`)
- Frontend: React 18, Vite, FluentUI components, feature-first structure under `frontend/src/features`
- Tests: Playwright E2E (`tests/e2e/app.spec.js`, `tests/e2e/ollama.spec.js`)
All deep-dive guides now live under `docs/` for easier discovery:
- Ubuntu Installation Guide – complete prerequisites installation for Ubuntu 22.04 LTS
- Quick Start – fastest path from clone to running servers
- Learning Guide – principles behind the architecture and code style
- Project Structure – file-by-file overview of the repo
- Pydantic Architecture – how models, validation, and operations fit together
- Unified Architecture – REST + MCP integration details and extension ideas
- Troubleshooting – common issues and fixes for setup, dev, and tests
- Clone the repo: `git clone <your-fork-url> && cd python-quart-vite-react`
- Run the automated bootstrap: `./setup.sh` (creates the repo-level `.venv`, installs frontend deps, installs Playwright, checks for Ollama)
- (Optional) Install Ollama for LLM features: `curl -fsSL https://ollama.com/install.sh | sh && ollama pull llama3.2:1b`
- Start all servers: `./start-dev.sh` (or use the VS Code "Full Stack: Backend + Frontend" launch config)
- Open `http://localhost:3001`, switch to the Tasks tab, and create a task—the backend and frontend are now synced
- (Optional) Test Ollama integration: `curl -X POST http://localhost:5001/api/ollama/chat -H "Content-Type: application/json" -d '{"messages":[{"role":"user","content":"Say hello"}]}'`
- (Optional) Run the Playwright suite from the repo root: `npm run test:e2e`
- Python 3.10+

  ```bash
  python3 -m venv .venv
  source .venv/bin/activate
  pip install -r requirements.txt
  ```

- Node.js 18+

  ```bash
  cd frontend && npm install
  ```

  Then, from the repo root:

  ```bash
  npm install                      # installs the Playwright runner
  npx playwright install chromium
  ```

  Debian/Ubuntu users may also need `npx playwright install-deps` for browser libs.
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull the lightweight model
ollama pull llama3.2:1b

# Verify installation
ollama list
```

The app works without Ollama, but the LLM endpoints (`/api/ollama/*`) will return 503 errors (see the sketch below). For production use, consider:

- `llama3.2:1b` (~1.3GB) — Fast, good for testing and simple tasks
- `llama3.2:3b` (~2GB) — Better quality, still fast
- `qwen2.5:3b` (~2GB) — Alternative with strong performance

The `setup.sh` script checks for Ollama and provides installation instructions if it is not found.
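As a hedged illustration of that 503 behavior (the real `backend/ollama_service.py` may be structured differently; `httpx` and the two-second timeout are assumptions), a backend can probe Ollama's standard `/api/tags` endpoint and degrade gracefully:

```python
# Hypothetical sketch: return 503 from LLM routes when Ollama is unreachable.
import httpx
from quart import Quart, jsonify

app = Quart(__name__)
OLLAMA_URL = "http://localhost:11434"

async def ollama_available() -> bool:
    try:
        async with httpx.AsyncClient(timeout=2) as client:
            response = await client.get(f"{OLLAMA_URL}/api/tags")
        return response.status_code == 200
    except httpx.HTTPError:
        return False

@app.get("/api/ollama/models")
async def list_models():
    if not await ollama_available():
        return jsonify({"error": "Ollama is not running"}), 503
    async with httpx.AsyncClient() as client:
        response = await client.get(f"{OLLAMA_URL}/api/tags")
    return jsonify(response.json())
```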
- Backend: `source .venv/bin/activate && cd backend && python app.py` → serves REST + MCP on `http://localhost:5001`
- Frontend: `cd frontend && npm run dev` → launches the Vite dev server on `http://localhost:3001`
- Ollama (optional): `ollama serve` → runs the LLM server on `http://localhost:11434`
`./start-dev.sh` (verifies dependencies, starts backend + frontend + Ollama if available, stops all on Ctrl+C)
Use the “Full Stack: Backend + Frontend” launch config to start backend + frontend with attached debuggers.
- Visit `http://localhost:3001`
- The Dashboard tab should show a ticking clock (SSE via `/api/time-stream`)
- The Tasks tab should show three sample tasks (seeded by `TaskService.initialize_sample_data()`)
- Create a task, mark it complete, delete it—confirm state updates instantly
Need everything in a single container? The repo now includes a multi-stage Dockerfile that builds the Vite frontend, copies the static assets next to the Quart app, and serves everything through Hypercorn on port 5001.
```bash
docker build -t quart-react-demo .
docker run --rm -p 5001:5001 quart-react-demo
```

- The container exposes only the backend port; the frontend is served by Quart from the built assets, so open `http://localhost:5001`.
- Set `-e FRONTEND_DIST=/custom/path` if you mount a different build output at runtime (see the sketch after this list).
- Hot reloading is not part of the container flow—use the regular dev servers for iterative work and Docker for demos or deployment.
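A minimal sketch of how the `FRONTEND_DIST` override can work on the Quart side, assuming a default of `frontend/dist` (the repo's actual implementation may differ):

```python
# Hypothetical sketch: serve the built frontend from a configurable directory.
import os
from quart import Quart, send_from_directory

FRONTEND_DIST = os.environ.get("FRONTEND_DIST", "frontend/dist")
app = Quart(__name__, static_folder=None)

@app.route("/")
@app.route("/<path:filename>")
async def serve_frontend(filename: str = "index.html"):
    # Fall back to index.html so client-side routes still resolve.
    if not os.path.isfile(os.path.join(FRONTEND_DIST, filename)):
        filename = "index.html"
    return await send_from_directory(FRONTEND_DIST, filename)
```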
- Dashboard tab: Streams `{"time","date","timestamp"}` via EventSource; connection errors show inline (a server-side sketch follows this list).
- Tasks tab: Uses FluentUI `DataGrid` + dialogs; `frontend/src/features/tasks/TaskList.jsx` keeps calculations (`getTaskStats`) separate from actions (API calls).
- About tab: Summarizes tech choices and linkable resources.
- Ollama API (backend only):
  - `POST /api/ollama/chat` — Chat with the local LLM (supports conversation history)
  - `GET /api/ollama/models` — List available models
  - Also exposed via MCP tools: `ollama_chat`, `list_ollama_models`
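For the Dashboard stream mentioned above, a minimal sketch of how an SSE endpoint like `/api/time-stream` can be written in Quart (the repo's actual handler in `backend/app.py` may differ):

```python
# Hypothetical sketch: an async SSE endpoint emitting {"time", "date", "timestamp"}.
import asyncio
import json
from datetime import datetime

from quart import Quart

app = Quart(__name__)

@app.get("/api/time-stream")
async def time_stream():
    async def generate():
        while True:
            now = datetime.now()
            payload = {
                "time": now.strftime("%H:%M:%S"),
                "date": now.strftime("%Y-%m-%d"),
                "timestamp": now.timestamp(),
            }
            # Each SSE frame is a "data:" line followed by a blank line.
            yield f"data: {json.dumps(payload)}\n\n"
            await asyncio.sleep(1)

    # Quart accepts an async generator plus headers as a streaming response.
    return generate(), {"Content-Type": "text/event-stream"}
```

On the client, `new EventSource('/api/time-stream')` receives each frame as a `message` event.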
Design notes:

- `TaskService` methods are "deep": they validate, mutate `_tasks_db`, and return `Task` models—no need for extra helpers (see the sketch after this list).
- Frontend features live under `frontend/src/features/*`, each with its own state, calculations, and FluentUI layout; all network requests go through `frontend/src/services/api.js` (`fetchJSON` centralizes error handling).
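A hedged sketch of what one such deep method can look like (the names mirror the bullet above, but the body is illustrative, not the repo's exact code):

```python
# Hypothetical sketch of a "deep" service method: validate, mutate, return a model.
from datetime import datetime
from pydantic import BaseModel

class Task(BaseModel):
    id: int
    title: str
    completed: bool = False
    completed_at: datetime | None = None

_tasks_db: dict[int, Task] = {}

class TaskService:
    @staticmethod
    def complete_task(task_id: int) -> Task:
        task = _tasks_db.get(task_id)
        if task is None:
            raise KeyError(f"Task {task_id} not found")   # validation
        updated = task.model_copy(
            update={"completed": True, "completed_at": datetime.now()}
        )
        _tasks_db[task_id] = updated                      # mutation
        return updated                                    # returns a Task model
```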
- Clone the repo:
git clone <your-fork-url> && cd python-quart-vite-react - Run the automated bootstrap:
./setup.sh(creates the repo-level.venv, installs frontend deps, installs Playwright) - Start both servers:
./start-dev.sh(or) use the VS Code “Full Stack: Backend + Frontend” launch config - Open
http://localhost:3001, switch to the Tasks tab, and create a task—the backend and frontend are now synced - (Optional) Run the Playwright suite from the repo root:
npm run test:e2e
Each tool schema is auto-generated from the `@operation` signature + Pydantic models—change it once, and both REST and MCP update.
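The decorator's internals aren't reproduced here; as a hedged sketch of the mechanism, a Pydantic 2 input model can emit its JSON schema once, and both the REST validator and the MCP `tools/list` entry can reuse it (`CreateTaskInput` and this `operation` body are illustrative):

```python
# Hypothetical sketch: derive an MCP tool schema from a Pydantic input model.
import inspect
from pydantic import BaseModel

class CreateTaskInput(BaseModel):
    title: str
    description: str = ""

def operation(func):
    """Record a tool schema from the function's Pydantic parameter annotation."""
    params_model = next(
        p.annotation
        for p in inspect.signature(func).parameters.values()
        if isinstance(p.annotation, type) and issubclass(p.annotation, BaseModel)
    )
    func.tool_schema = {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "inputSchema": params_model.model_json_schema(),
    }
    return func

@operation
def create_task(data: CreateTaskInput):
    """Create a new task."""
    ...

print(create_task.tool_schema["inputSchema"]["properties"])
# {'title': {...}, 'description': {...}}
```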
Available npm scripts:

| Command | Purpose |
|---|---|
| `npm run test:e2e` | Run all Playwright E2E tests |
| `npm run test:e2e:ui` | Run tests in interactive UI mode |
| `npm run test:e2e:report` | View test results report |
| `npm run ollama:pull` | Download the llama3.2:1b model |
| `npm run ollama:start` | Start the Ollama server manually |
| `npm run ollama:status` | Check whether Ollama is running |
Example Ollama API calls:

```bash
# List available models
curl http://localhost:5001/api/ollama/models

# Simple chat
curl -X POST http://localhost:5001/api/ollama/chat \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What is Python?"}
    ],
    "model": "llama3.2:1b",
    "temperature": 0.7
  }'

# Conversation with history
curl -X POST http://localhost:5001/api/ollama/chat \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "My name is Alice"},
      {"role": "assistant", "content": "Nice to meet you, Alice!"},
      {"role": "user", "content": "What is my name?"}
    ],
    "model": "llama3.2:1b"
  }'

# Via MCP JSON-RPC
curl -X POST http://localhost:5001/mcp \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "ollama_chat",
      "arguments": {
        "messages": [{"role": "user", "content": "Hello!"}]
      }
    },
    "id": 1
  }'
```
The repo includes comprehensive E2E tests using Playwright:
```bash
# Run all tests
npm run test:e2e

# Interactive mode with UI
npm run test:e2e:ui

# Run specific test file
npx playwright test tests/e2e/app.spec.js --project=chromium

# View last test report
npm run test:e2e:report
```

Test suites:

- `tests/e2e/app.spec.js` — Dashboard, tasks, SSE streaming
- `tests/e2e/ollama.spec.js` — LLM chat, model listing, validation (requires Ollama)
Tests rely on:

- Sample tasks being present
- Stable `data-testid` attributes in the React components
- SSE payload shape `{ time, date, timestamp }`
- Ollama running on `localhost:11434` with the `llama3.2:1b` model (for the Ollama tests)
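The specs themselves are JavaScript, but the SSE payload shape is easy to smoke-check outside the browser too. A hedged Python sketch (assuming `httpx` is installed and the backend is running on port 5001):

```python
# Hypothetical sketch: read a few SSE events and check the payload shape.
import json
import httpx

with httpx.stream("GET", "http://localhost:5001/api/time-stream", timeout=10) as response:
    events_seen = 0
    for line in response.iter_lines():
        if not line.startswith("data:"):
            continue  # skip blank separators and keep-alive comments
        payload = json.loads(line[len("data:"):].strip())
        assert {"time", "date", "timestamp"} <= payload.keys()
        events_seen += 1
        if events_seen >= 3:
            break

print("SSE payload shape looks correct")
```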
| Issue | Fix |
|---|---|
| Port 5001 in use | `sudo lsof -i :5001`, then kill the process (macOS uses 5000 for AirPlay, so the backend defaults to 5001) |
| `source .venv/bin/activate` fails | Recreate the env: `rm -rf .venv && python3 -m venv .venv && pip install -r backend/requirements.txt` |
| `npm install` errors | `npm cache clean --force && rm -rf node_modules package-lock.json && npm install` |
| Playwright browser install fails | `sudo npx playwright install-deps && npx playwright install` |
| Ollama not found | Install: `curl -fsSL https://ollama.com/install.sh \| sh`, then `ollama pull llama3.2:1b` |
| Ollama connection error | Start the server with `ollama serve`, or check whether it is running: `curl http://localhost:11434/api/tags` |
| LLM responses are slow | Try a smaller model (`llama3.2:1b` is fastest) or ensure GPU acceleration is enabled |
See `docs/TROUBLESHOOTING.md` for more detailed solutions.
- Add a `priority` field to the Pydantic models + UI (see the sketch after this list)
- Extend the SSE stream to broadcast task stats (remember to update `connectToTimeStream` consumers)
- Persist data with SQLite or Postgres instead of `_tasks_db`
- Add more Playwright specs (filters, SSE error handling, MCP flows)
- Build a chat UI: create `frontend/src/features/ollama/OllamaChat.jsx` with FluentUI components and connect it to `/api/ollama/chat`
- Smart task descriptions: use Ollama to auto-generate task descriptions from titles
- Task summarization: summarize completed tasks using the LLM
- Multi-model comparison: let users select different Ollama models and compare responses
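For the first idea on that list, a hedged sketch of the model change (everything beyond the `priority` field itself is illustrative, since the repo's real models in `backend/tasks.py` aren't shown here):

```python
# Hypothetical sketch: add a validated priority field to the Task models.
from enum import Enum
from pydantic import BaseModel, Field

class Priority(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

class TaskCreate(BaseModel):
    title: str = Field(min_length=1)
    description: str = ""
    priority: Priority = Priority.MEDIUM   # new field with a safe default

class Task(TaskCreate):
    id: int
    completed: bool = False
```

Because REST and MCP schemas are both derived from the Pydantic models, a change like this propagates to both interfaces automatically.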
Happy coding! 🎉