A modern full-stack chatbot application with a FastAPI backend that serves the Qwen language model through Ollama, paired with a React + TypeScript frontend. This project demonstrates how to build a professional AI-powered chat interface with local LLM inference.
- FastAPI Backend: High-performance async web framework
- Ollama Integration: Local LLM inference with Qwen model
- CORS Support: Ready for frontend integration
- Error Handling: Comprehensive error handling and HTTP status codes
- Environment Configuration: Flexible configuration via environment variables
- API Documentation: Auto-generated OpenAPI/Swagger documentation
- React 19: Latest React version with modern hooks and features
- TypeScript: Full type safety and excellent developer experience
- Vite: Lightning-fast development server and build tool
- Real-time Chat: Interactive chat interface with message history
- Local Persistence: Chat history saved to localStorage
- Professional UI: Modern, responsive design with animations
- Loading States: Visual feedback during API calls
chatbot-demo/
├── README.md                 # Main project documentation
├── .gitignore                # Git ignore rules for full stack
├── doc/
│   └── chatbot-demo.gif      # Demo screenshot/video
├── chatbot-backend/          # FastAPI backend
│   ├── main.py               # FastAPI application
│   ├── requirements.txt      # Python dependencies
│   ├── test.http             # API test cases
│   ├── README.md             # Backend documentation
│   └── __pycache__/          # Python cache (ignored)
└── chatbot-frontend/         # React frontend
    ├── src/
    │   ├── App.tsx           # Main chat component
    │   ├── App.css           # Chat interface styles
    │   ├── main.tsx          # React app entry point
    │   └── index.css         # Global styles
    ├── public/
    │   └── vite.svg          # Vite logo
    ├── package.json          # Node.js dependencies
    ├── index.html            # HTML template
    ├── vite.config.ts        # Vite configuration
    ├── tsconfig.json         # TypeScript configuration
    └── README.md             # Frontend documentation
Before running this project, ensure you have:
- Python 3.8+ installed
- Node.js 18+ installed
- npm or yarn package manager
- Ollama installed and running
- Git (for cloning the repository)
git clone <your-repo-url>
cd chatbot-demo
# Navigate to backend directory
cd chatbot-backend
# Create virtual environment (recommended)
python -m venv venv
# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
# Navigate to frontend directory (from project root)
cd chatbot-frontend
# Install dependencies
npm install
# or
yarn install
Install Ollama from ollama.ai and pull the Qwen model:
# Start Ollama service
ollama serve
# In another terminal, pull the Qwen model
ollama pull qwen2.5
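To confirm from Python that Ollama is reachable and the model has been pulled, a quick optional check against Ollama's /api/tags endpoint (assumes the requests package is installed) looks like this:

```python
import requests

# List the models available to the local Ollama server;
# "qwen2.5" should appear after running `ollama pull qwen2.5`.
tags = requests.get("http://localhost:11434/api/tags", timeout=5).json()
print([model["name"] for model in tags.get("models", [])])
```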
Create a .env file in the chatbot-backend/ directory:
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=qwen2.5
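The backend reads these values at startup. A minimal sketch of the usual pattern (the exact handling in main.py may differ, and the use of python-dotenv is an assumption):

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv; otherwise export the variables in your shell

# Load .env (if present) into the process environment, then fall back to the documented defaults.
load_dotenv()
OLLAMA_URL = os.getenv("OLLAMA_URL", "http://localhost:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "qwen2.5")
```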
cd chatbot-backend
uvicorn main:app --reload --host 0.0.0.0 --port 8000
The backend will start at http://localhost:8000
In a new terminal:
cd chatbot-frontend
npm run dev
# or
yarn dev
The frontend will start at http://localhost:5173
- Frontend: Open http://localhost:5173 in your browser
- Backend API Docs: Visit http://localhost:8000/docs for Swagger UI
- Health Check: Visit http://localhost:8000/ for server status
Use the provided HTTP test file:
- Install the "REST Client" extension in VS Code
- Open chatbot-backend/test.http
- Click "Send Request" above any test case
Available test cases:
- Health check
- Basic conversation
- Technical questions
- Code generation requests
- Error handling (empty messages)
- Long message handling
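If you prefer scripting these checks instead of clicking through the REST Client extension, a rough equivalent using Python's requests library (not part of the repository) could be:

```python
import requests

BASE_URL = "http://localhost:8000"

# Health check
print(requests.get(f"{BASE_URL}/").json())

# Basic conversation
resp = requests.post(f"{BASE_URL}/chat", json={"user_message": "Hello! What can you do?"})
print(resp.status_code, resp.json())

# Error handling: an empty message should come back with a client-error status
resp = requests.post(f"{BASE_URL}/chat", json={"user_message": ""})
print(resp.status_code, resp.text)
```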
- Open http://localhost:5173
- Type messages in the chat interface
- Verify message persistence by refreshing the page
- Test responsive design on different screen sizes
GET http://localhost:8000/
Response:
{
"status": "ok",
"model": "qwen2.5",
"ollama_url": "http://localhost:11434"
}
POST http://localhost:8000/chat
Content-Type: application/json
{
"user_message": "Your message here"
}
Response:
{
"bot_response": "AI response here"
}
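The actual implementation lives in chatbot-backend/main.py. As a rough sketch, an endpoint with this request/response shape can forward user_message to Ollama along these lines (the Ollama call, timeout, and error handling below are illustrative, not a copy of the project code):

```python
import os

import requests
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

OLLAMA_URL = os.getenv("OLLAMA_URL", "http://localhost:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "qwen2.5")


class ChatRequest(BaseModel):
    user_message: str


@app.post("/chat")
def chat(req: ChatRequest):
    # Reject empty messages with a client error, matching the test case above.
    if not req.user_message.strip():
        raise HTTPException(status_code=400, detail="user_message must not be empty")
    try:
        # Ollama returns the whole completion as one JSON object when stream is False.
        r = requests.post(
            f"{OLLAMA_URL}/api/generate",
            json={"model": OLLAMA_MODEL, "prompt": req.user_message, "stream": False},
            timeout=120,
        )
        r.raise_for_status()
    except requests.RequestException:
        raise HTTPException(status_code=503, detail="Ollama service unavailable")
    return {"bot_response": r.json()["response"]}
```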
- Professional Design: Modern gradient background with card-based chat container
- Message Bubbles: Distinct styling for user, bot, and error messages
- Animations: Smooth message appearance with slide-in effects
- Loading Indicators: Animated dots while AI processes requests
- Auto-scroll: Automatically scrolls to latest messages
- Empty State: Welcoming message when chat is empty
- Responsive Design: Works on desktop, tablet, and mobile devices
- Keyboard Support: Enter key to send messages
- Message Persistence: Chat history saved across browser sessions
- Clear Chat: Option to clear conversation history
- Input Validation: Prevents sending empty messages
- Error Handling: User-friendly error messages
- Screen Reader Support: Proper ARIA labels and semantic HTML
- Keyboard Navigation: Full keyboard accessibility
- Reduced Motion: Respects user's motion preferences
- Focus Management: Clear focus indicators
- Color Contrast: High contrast for readability
Environment variables for the FastAPI backend:
| Variable | Default | Description |
|---|---|---|
| OLLAMA_URL | http://localhost:11434 | Ollama server URL |
| OLLAMA_MODEL | qwen2.5 | Ollama model name |
The frontend communicates with the backend at http://localhost:8000. To change this, modify the fetch URL in chatbot-frontend/src/App.tsx:
const response = await fetch('http://localhost:8000/chat', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ user_message: userMessage.text }),
});
The backend accepts requests from:
- http://localhost:5173 (Vite React dev server)
- http://localhost:3000 (alternative React dev server)
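In FastAPI this is typically configured with CORSMiddleware; a sketch matching the origins listed above (the exact configuration in main.py may differ):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow the Vite dev server and the alternative React dev server to call the API.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:5173", "http://localhost:3000"],
    allow_methods=["*"],
    allow_headers=["*"],
)
```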
- Ollama service unavailable (503 error)
  # Ensure Ollama is running
  ollama serve
  # Check if the model exists
  ollama list
- Model not found
  # Pull the required model
  ollama pull qwen2.5
- Backend port conflicts
  # Change the port
  uvicorn main:app --port 8001
- Frontend build errors
  # Clear cache and reinstall
  rm -rf node_modules package-lock.json
  npm install
- CORS errors from frontend
  - Verify backend CORS origins include your frontend URL
  - Check browser console for specific CORS errors
Enable debug logging for the backend:
uvicorn main:app --reload --log-level debug
- Backend: Remove debug flags and set production environment variables
- Frontend: Build optimized static files:
  cd chatbot-frontend
  npm run build
- Backend: Deploy to cloud services (AWS, Azure, GCP) with Docker
- Frontend: Deploy to static hosting (Netlify, Vercel, GitHub Pages)
- Full Stack: Use container orchestration (Docker Compose, Kubernetes)
Create a Dockerfile for the backend:
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
- Fork the repository
- Create a feature branch: git checkout -b feature-name
- Make your changes in both backend and frontend as needed
- Test thoroughly:
  - Backend: Use the test.http file
  - Frontend: Test in browser with various scenarios
- Run linting:
  - Backend: flake8 or black
  - Frontend: npm run lint
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- FastAPI for the excellent web framework
- Ollama for local LLM inference
- Qwen for the language model
- React for the frontend framework
- Vite for the build tool
- TypeScript for type safety
- FastAPI Documentation
- Ollama Documentation
- React Documentation
- Vite Documentation
- TypeScript Handbook
Happy Coding!