Chatbot Demo - Full Stack AI Assistant

A modern full-stack chatbot application built with a FastAPI backend that uses Ollama and the Qwen language model, paired with a React + TypeScript frontend. This project demonstrates how to create a professional AI-powered chat interface with local LLM inference.

(Demo animation: doc/chatbot-demo.gif)

🚀 Features

Backend (FastAPI + Ollama)

  • FastAPI Backend: High-performance async web framework
  • Ollama Integration: Local LLM inference with Qwen model
  • CORS Support: Ready for frontend integration
  • Error Handling: Comprehensive error handling and HTTP status codes
  • Environment Configuration: Flexible configuration via environment variables
  • API Documentation: Auto-generated OpenAPI/Swagger documentation

Frontend (React + TypeScript + Vite)

  • React 19: Latest React version with modern hooks and features
  • TypeScript: Full type safety and excellent developer experience
  • Vite: Lightning-fast development server and build tool
  • Real-time Chat: Interactive chat interface with message history
  • Local Persistence: Chat history saved to localStorage
  • Professional UI: Modern, responsive design with animations
  • Loading States: Visual feedback during API calls

πŸ“ Project Structure

chatbot-demo/
├── README.md                   # Main project documentation
├── .gitignore                  # Git ignore rules for full stack
├── doc/
│   └── chatbot-demo.gif        # Demo screenshot/video
├── chatbot-backend/            # FastAPI backend
│   ├── main.py                 # FastAPI application
│   ├── requirements.txt        # Python dependencies
│   ├── test.http               # API test cases
│   ├── README.md               # Backend documentation
│   └── __pycache__/            # Python cache (ignored)
└── chatbot-frontend/           # React frontend
    ├── src/
    │   ├── App.tsx             # Main chat component
    │   ├── App.css             # Chat interface styles
    │   ├── main.tsx            # React app entry point
    │   └── index.css           # Global styles
    ├── public/
    │   └── vite.svg            # Vite logo
    ├── package.json            # Node.js dependencies
    ├── index.html              # HTML template
    ├── vite.config.ts          # Vite configuration
    ├── tsconfig.json           # TypeScript configuration
    └── README.md               # Frontend documentation

📋 Prerequisites

Before running this project, ensure you have:

  • Python 3.8+ installed
  • Node.js 18+ installed
  • npm or yarn package manager
  • Ollama installed and running
  • Git (for cloning the repository)

πŸ› οΈ Installation

1. Clone the Repository

git clone <your-repo-url>
cd chatbot-demo

2. Set Up Backend (FastAPI + Ollama)

# Navigate to backend directory
cd chatbot-backend

# Create virtual environment (recommended)
python -m venv venv

# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

3. Set Up Frontend (React + TypeScript)

# Navigate to frontend directory (from project root)
cd chatbot-frontend

# Install dependencies
npm install
# or
yarn install

4. Set Up Ollama

Install Ollama from ollama.ai and pull the Qwen model:

# Start Ollama service
ollama serve

# In another terminal, pull the Qwen model
ollama pull qwen2.5
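
To confirm the model is ready before wiring it into the backend, you can send it a one-off prompt from the terminal:

# Quick sanity check that the model responds
ollama run qwen2.5 "Say hello in one sentence"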

5. Environment Configuration (Optional)

Create a .env file in the chatbot-backend/ directory:

OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=qwen2.5
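
Both variables are optional. A minimal sketch of how the backend could read them with os.getenv (illustrative, not a verbatim copy of main.py; loading a .env file additionally needs something like python-dotenv):

import os

# Fall back to the documented defaults when the variables are unset
OLLAMA_URL = os.getenv("OLLAMA_URL", "http://localhost:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "qwen2.5")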

🚀 Running the Application

1. Start the Backend Server

cd chatbot-backend
uvicorn main:app --reload --host 0.0.0.0 --port 8000

The backend will start at http://localhost:8000

2. Start the Frontend Server

In a new terminal:

cd chatbot-frontend
npm run dev
# or
yarn dev

The frontend will start at http://localhost:5173

3. Access the Application

  • Frontend: Open http://localhost:5173 in your browser
  • Backend API Docs: Visit http://localhost:8000/docs for Swagger UI
  • Health Check: Visit http://localhost:8000/ for server status
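
A quick way to verify the backend from the command line is to curl the health endpoint; it should return the JSON shown under API Documentation below:

curl http://localhost:8000/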

🧪 Testing

Backend API Testing

Use the provided HTTP test file:

  1. Install the "REST Client" extension in VS Code
  2. Open chatbot-backend/test.http
  3. Click "Send Request" above any test case

Available test cases:

  • Health check
  • Basic conversation
  • Technical questions
  • Code generation requests
  • Error handling (empty messages)
  • Long message handling
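
For reference, requests in test.http follow the REST Client format. A minimal example mirroring the chat endpoint documented below:

### Basic conversation
POST http://localhost:8000/chat
Content-Type: application/json

{
  "user_message": "Hello, who are you?"
}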

Frontend Testing

  1. Open http://localhost:5173
  2. Type messages in the chat interface
  3. Verify message persistence by refreshing the page
  4. Test responsive design on different screen sizes

📡 API Documentation

Health Check Endpoint

GET http://localhost:8000/

Response:

{
  "status": "ok",
  "model": "qwen2.5",
  "ollama_url": "http://localhost:11434"
}

Chat Endpoint

POST http://localhost:8000/chat
Content-Type: application/json

{
  "user_message": "Your message here"
}

Response:

{
  "bot_response": "AI response here"
}
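
Beyond test.http, any HTTP client works. A minimal Python example (assumes the requests package, pip install requests; the field names match the request/response shapes above):

import requests

# Send one message to the chat endpoint and print the model's reply
resp = requests.post(
    "http://localhost:8000/chat",
    json={"user_message": "What is FastAPI?"},
    timeout=120,  # the first request may be slow while the model loads
)
resp.raise_for_status()
print(resp.json()["bot_response"])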

🎨 Frontend Features

Chat Interface

  • Professional Design: Modern gradient background with card-based chat container
  • Message Bubbles: Distinct styling for user, bot, and error messages
  • Animations: Smooth message appearance with slide-in effects
  • Loading Indicators: Animated dots while the AI processes a request
  • Auto-scroll: Automatically scrolls to latest messages
  • Empty State: Welcoming message when chat is empty

User Experience

  • Responsive Design: Works on desktop, tablet, and mobile devices
  • Keyboard Support: Enter key to send messages
  • Message Persistence: Chat history saved across browser sessions
  • Clear Chat: Option to clear conversation history
  • Input Validation: Prevents sending empty messages
  • Error Handling: User-friendly error messages

Accessibility

  • Screen Reader Support: Proper ARIA labels and semantic HTML
  • Keyboard Navigation: Full keyboard accessibility
  • Reduced Motion: Respects user's motion preferences
  • Focus Management: Clear focus indicators
  • Color Contrast: High contrast for readability

🔧 Configuration

Backend Configuration

Environment variables for the FastAPI backend:

Variable       Default                  Description
OLLAMA_URL     http://localhost:11434   Ollama server URL
OLLAMA_MODEL   qwen2.5                  Ollama model name

Frontend Configuration

The frontend communicates with the backend at http://localhost:8000. To change this, modify the fetch URL in chatbot-frontend/src/App.tsx:

const response = await fetch('http://localhost:8000/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ user_message: userMessage.text }),
});

CORS Configuration

The backend accepts requests from:

  • http://localhost:5173 (Vite React dev server)
  • http://localhost:3000 (Alternative React dev server)
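
In FastAPI this is typically wired up with CORSMiddleware; the backend's setup is likely along these lines (a sketch, not a verbatim copy of main.py):

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow the dev servers listed above to call the API from the browser
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:5173", "http://localhost:3000"],
    allow_methods=["*"],
    allow_headers=["*"],
)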

πŸ› Troubleshooting

Common Issues

  1. Ollama service unavailable (503 error)

    # Ensure Ollama is running
    ollama serve
    
    # Check if the model exists
    ollama list
  2. Model not found

    # Pull the required model
    ollama pull qwen2.5
  3. Backend port conflicts

    # Run the server on a different port
    uvicorn main:app --port 8001

    If you change the port, update the fetch URL in chatbot-frontend/src/App.tsx to match.
  4. Frontend build errors

    # Clear cache and reinstall
    rm -rf node_modules package-lock.json
    npm install
  5. CORS errors from frontend

    • Verify backend CORS origins include your frontend URL
    • Check browser console for specific CORS errors

Debug Mode

Enable debug logging for the backend:

uvicorn main:app --reload --log-level debug

🚀 Deployment

Production Build

  1. Backend: Remove the --reload flag and set production environment variables
  2. Frontend: Build optimized static files
    cd chatbot-frontend
    npm run build
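
The optimized files land in chatbot-frontend/dist/ by default, and the standard Vite scripts include a local preview server for smoke-testing the build:

# Serve the production build locally (assumes the default Vite preview script)
npm run preview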

Deployment Options

  • Backend: Deploy to cloud services (AWS, Azure, GCP) with Docker
  • Frontend: Deploy to static hosting (Netlify, Vercel, GitHub Pages)
  • Full Stack: Use container orchestration (Docker Compose, Kubernetes)

Docker Support

Create a Dockerfile for the backend:

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
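
To try the image locally, point OLLAMA_URL at the host machine, since Ollama runs outside the container (run from the project root, assuming the Dockerfile sits in chatbot-backend/; host.docker.internal resolves on Docker Desktop, while Linux may need --add-host=host.docker.internal:host-gateway):

docker build -t chatbot-backend ./chatbot-backend
docker run -p 8000:8000 \
  -e OLLAMA_URL=http://host.docker.internal:11434 \
  chatbot-backend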

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature-name
  3. Make your changes in both backend and frontend as needed
  4. Test thoroughly:
    • Backend: Use test.http file
    • Frontend: Test in browser with various scenarios
  5. Run linting:
    • Backend: flake8 or black
    • Frontend: npm run lint
  6. Submit a pull request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • FastAPI for the excellent web framework
  • Ollama for local LLM inference
  • Qwen for the language model
  • React for the frontend framework
  • Vite for the build tool
  • TypeScript for type safety

📚 Additional Resources

  • FastAPI documentation: https://fastapi.tiangolo.com
  • Ollama: https://ollama.ai
  • Qwen2.5 models: https://github.com/QwenLM/Qwen2.5
  • React documentation: https://react.dev
  • Vite documentation: https://vitejs.dev
  • TypeScript documentation: https://www.typescriptlang.org

Happy Coding! 🎉
