A modern, full-stack AI chatbot application featuring a React/TypeScript frontend with Material-UI and a Python FastAPI backend powered by OpenAI's GPT-3.5-turbo model. The application supports real-time streaming responses via WebSocket connections and includes conversation management with a sidebar for organizing multiple chat sessions.
- Real-time Streaming: Messages stream from the AI as they're generated for a natural conversation flow
- WebSocket Communication: Efficient real-time bidirectional communication between frontend and backend
- Dark/Light Mode: Toggle between dark and light themes with a single click
- Responsive Design: Clean, modern UI built with Material-UI components
- Auto-scroll: Automatically scrolls to the latest message in the conversation
- Multiple Conversations: Create and manage multiple chat sessions
- Conversation Sidebar: Easy navigation between different chat threads
- Server Persistence: Conversations and messages are stored in the FastAPI backend so you can resume any thread from any browser
- Quick Access: Click on any saved conversation to reload it from the server
- Type-Safe: Full TypeScript implementation on the frontend
- Error Handling: Comprehensive error handling with user-friendly notifications
- Reconnection Logic: Automatic WebSocket reconnection with exponential backoff and per-request correlation IDs
- Streaming Protocol: WebSocket payloads are structured JSON chunks that include request IDs and conversation IDs, so concurrent prompts never leak tokens across sessions
- Security First: Guest sessions obtain signed JWTs from `/auth/guest`, every API/WebSocket call validates the token, and secrets/configuration are centrally managed through a typed Pydantic settings layer
- Persistence Ready: The backend defaults to SQLite for local development but can be pointed at PostgreSQL/MySQL by changing `DATABASE_URL`
- CORS Enabled: Backend configured to accept requests from the frontend
- Structured Logging & Health Checks: JSON logs plus `/health` allow production observability and readiness probes out of the box
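The chunk-routing idea behind the streaming protocol can be sketched in a few lines of Python. This is an illustrative stand-in, not the project's actual code (the real demultiplexing happens in `chatService.ts` on the frontend), but it shows why carrying a request ID and conversation ID in every payload keeps concurrent prompts from mixing tokens:

```python
import json
from collections import defaultdict

def route_chunk(buffers: dict, raw: str) -> None:
    """Append a streamed token to the buffer keyed by its conversation
    and request IDs, so interleaved chunks from concurrent prompts
    never end up in the same message."""
    chunk = json.loads(raw)
    key = (chunk["conversationId"], chunk["requestId"])
    buffers[key].append(chunk["content"])

buffers = defaultdict(list)
# Two prompts streaming concurrently over the same socket:
route_chunk(buffers, '{"conversationId": "c1", "requestId": "r1", "content": "Hel"}')
route_chunk(buffers, '{"conversationId": "c1", "requestId": "r2", "content": "Bon"}')
route_chunk(buffers, '{"conversationId": "c1", "requestId": "r1", "content": "lo"}')
```

Even though the `r1` and `r2` chunks arrive interleaved, each buffer reassembles only its own request's tokens.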
New to production systems? Read `docs/PRODUCTION_GUIDE.md` for a decision-by-decision explanation of how each feature works and why it exists.
The project consists of two main components:
```
CoDHeChat/
├── backend/                      # Python FastAPI server
│   ├── main.py                   # WebSocket server and OpenAI integration
│   ├── pyproject.toml            # Python dependencies (uv package manager)
│   └── .python-version           # Python version specification
│
└── my-chatbot/                   # React TypeScript frontend
    ├── src/
    │   ├── components/           # React components
    │   │   ├── ChatContainer.tsx # Main chat interface
    │   │   ├── MessageList.tsx   # Message display
    │   │   ├── Message.tsx       # Individual message
    │   │   └── MessageInput.tsx  # User input field
    │   ├── services/             # API communication layer
    │   │   └── chatService.ts    # WebSocket manager
    │   ├── types/                # TypeScript type definitions
    │   │   └── chat.ts
    │   ├── theme/                # Material-UI theme configuration
    │   │   └── theme.ts
    │   ├── hooks/                # Custom React hooks
    │   │   └── useChatScroll.ts
    │   ├── App.tsx               # Root component
    │   └── main.tsx              # Application entry point
    └── package.json
```
- User Input → User types a message in the `MessageInput` component
- State Update → `ChatContainer` adds the user message to state immediately
- WebSocket Send → Message sent to the backend via the WebSocket connection
- OpenAI Processing → Backend streams the response from GPT-3.5-turbo
- Real-time Updates → Frontend receives and displays partial responses as they arrive
- Completion → Final message displayed and conversation auto-saved
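The streaming steps above can be mimicked without a network. In this minimal sketch, `fake_stream` stands in for the token stream from GPT-3.5-turbo, and `render_stream` plays the frontend's role of replacing the assistant message with the accumulated text on every chunk (function names are hypothetical, not from the project):

```python
from typing import Iterator, List

def fake_stream(reply: str) -> Iterator[str]:
    """Stand-in for the OpenAI streaming response: yields one token at a time."""
    for word in reply.split(" "):
        yield word + " "

def render_stream(tokens: Iterator[str]) -> List[str]:
    """Collect every intermediate UI state as chunks arrive; the last
    state is the completed assistant message."""
    states, text = [], ""
    for token in tokens:
        text += token
        states.append(text)  # each partial state re-renders the message in place
    return states

states = render_stream(fake_stream("Hello there friend"))
final = states[-1].strip()
```

The growing list of partial states is exactly what makes the response feel "typed out" in the UI rather than appearing all at once.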
Before you begin, ensure you have the following installed:
- Python 3.13+ - Download Python
- Node.js 18+ - Download Node.js
- npm or yarn - Comes with Node.js
- uv (Python package manager) - Install uv
- OpenAI API Key - Get API Key
```bash
git clone https://github.com/CodeHalwell/CoDHeChat.git
cd CoDHeChat
```

```bash
# Navigate to backend directory
cd backend

# Install dependencies using uv
uv sync

# Create a .env file and add your OpenAI API key
echo "OPENAI_API_KEY=your_api_key_here" > .env
```

```bash
# Navigate to frontend directory
cd ../my-chatbot

# Install dependencies
npm install

# (Optional) Create .env file for custom API URL
echo "VITE_API_URL=http://localhost:8000" > .env
```

You need to run both the backend and frontend servers simultaneously.
```bash
cd backend
uv run uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```

The backend will start on http://localhost:8000

```bash
cd my-chatbot
npm run dev
```

The frontend will start on http://localhost:5173 (or another port if 5173 is occupied)
Open your browser and navigate to http://localhost:5173
Copy backend/.env.example to backend/.env and update the values:
```env
SECRET_KEY=replace-me
OPENAI_API_KEY=sk-...your-key-here...
DATABASE_URL=sqlite:///./chat.db
MODEL_NAME=gpt-4o-mini
LOG_LEVEL=INFO
LOG_JSON=true
```

- `SECRET_KEY` - required for signing JWT access tokens
- `OPENAI_API_KEY` - required for real responses (tests override the chat service so they run offline)
- `DATABASE_URL` - defaults to SQLite but accepts any SQLAlchemy connection string
- `MODEL_NAME` - OpenAI model to call when generating responses
- `LOG_LEVEL` - log verbosity (`DEBUG`, `INFO`, etc.)
- `LOG_JSON` - set to `false` locally for human-friendly plain-text logs
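A minimal stand-in for the typed settings layer might look like the following. The real project uses Pydantic settings; this plain-dataclass version is only illustrative, but it shows the same idea of reading, defaulting, and coercing the variables documented above in one place:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    secret_key: str
    database_url: str
    model_name: str
    log_level: str
    log_json: bool

def load_settings(env=os.environ) -> Settings:
    """Read and coerce the environment variables documented above."""
    return Settings(
        secret_key=env["SECRET_KEY"],  # required: raises KeyError if missing
        database_url=env.get("DATABASE_URL", "sqlite:///./chat.db"),
        model_name=env.get("MODEL_NAME", "gpt-4o-mini"),
        log_level=env.get("LOG_LEVEL", "INFO"),
        log_json=env.get("LOG_JSON", "true").lower() == "true",
    )

settings = load_settings({"SECRET_KEY": "replace-me", "LOG_JSON": "false"})
```

Centralizing configuration like this means a missing required secret fails loudly at startup instead of deep inside a request handler.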
Configure the frontend via .env file in the my-chatbot directory:
```env
# API endpoint (default: http://localhost:8000)
VITE_API_URL=http://localhost:8000
```

- Start a Chat: The frontend automatically provisions a short-lived guest token from `/auth/guest`, opens an authenticated WebSocket, and fetches your existing conversations
- Send Messages: Type a prompt and press Enter or click Send; the UI optimistically renders your message while the backend persists it
- Watch Responses Stream: Each WebSocket chunk updates the assistant message in place via a structured `{requestId, conversationId, content}` payload
- Create New Conversations: Click "+ New Conversation" in the sidebar to clear the panel; the server assigns a real conversation ID as soon as the next message is processed
- Switch Between Chats: Click on any conversation in the sidebar to retrieve its entire history from the backend
- Toggle Theme: Click the sun/moon icon in the top-right corner (the chosen mode is persisted to localStorage)
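The guest-token handshake follows a standard sign-then-verify shape. The toy example below uses a raw HMAC rather than a full JWT (the real backend issues proper JWTs from `/auth/guest`), and the `SECRET` value is just a placeholder, but the verification logic is the same idea:

```python
import base64
import hashlib
import hmac

SECRET = b"replace-me"  # stands in for the backend's SECRET_KEY

def sign(payload: bytes) -> str:
    """Return 'base64(payload).base64(mac)' signed with the server secret."""
    mac = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(mac).decode())

def verify(token: str):
    """Return the payload if the signature checks out, else None."""
    body, _, sig = token.rpartition(".")
    payload = base64.urlsafe_b64decode(body)
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload, hashlib.sha256).digest()).decode()
    return payload if hmac.compare_digest(sig, expected) else None

token = sign(b'{"sub": "guest"}')
```

Because every API and WebSocket call revalidates the token, a client can only resume conversations it was actually issued a signature for.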
```bash
cd backend

# Run with auto-reload for development
uv run uvicorn main:app --reload

# Run tests (if available)
uv run pytest
```

```bash
cd my-chatbot

# Start development server
npm run dev

# Run linter
npm run lint

# Build for production
npm run build

# Preview production build
npm run preview
```

- Backend: Python code follows standard conventions, uses type hints
- Frontend: TypeScript with ESLint, React hooks patterns
- Components: Functional components with TypeScript interfaces
- API Keys: Never commit `.env` files or API keys to version control
- CORS: The current configuration allows all origins (`allow_origins=["*"]`); this is only suitable for development
- Production: Before deploying to production:
- Set specific allowed origins in CORS configuration
- Add rate limiting to prevent API abuse
- Implement authentication/authorization
- Use HTTPS for all communications
- Add input validation and sanitization
- Consider implementing token usage limits
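Rate limiting from the checklist above can be as simple as a per-client token bucket. This is an illustrative sketch, not project code; a real deployment would more likely use middleware or a reverse proxy, but the accounting is the same:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilled at `rate` tokens/second."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A bucket of 2 lets two requests through, then throttles until refill.
bucket = TokenBucket(capacity=2, rate=1.0)
results = [bucket.allow(), bucket.allow(), bucket.allow()]
```

Keeping one bucket per token subject (the JWT `sub` claim, for instance) would bound how fast any single guest session can burn OpenAI credits.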
Problem: `ModuleNotFoundError: No module named 'fastapi'`
- Solution: Make sure you've run `uv sync` in the backend directory
Problem: `openai.AuthenticationError`
- Solution: Check that your `OPENAI_API_KEY` is correctly set in the `.env` file
Problem: WebSocket connection failed
- Solution: Ensure the backend server is running on port 8000
Problem: `Failed to fetch` or connection errors
- Solution: Check that the backend is running and the `VITE_API_URL` is correct
Problem: Module not found errors
- Solution: Run `npm install` to ensure all dependencies are installed
Problem: WebSocket won't connect
- Solution: Check browser console for errors, ensure CORS is properly configured
- FastAPI: Modern Python web framework for building APIs
- OpenAI SDK: Official Python SDK for OpenAI API
- Uvicorn: Lightning-fast ASGI server
- python-dotenv: Environment variable management
- WebSockets: Real-time bidirectional communication
- React 19: Latest React with modern hooks
- TypeScript: Type-safe JavaScript
- Material-UI (MUI): Comprehensive React UI component library
- Vite: Next-generation frontend build tool
- Emotion: CSS-in-JS styling
- Axios: Promise-based HTTP client
This is a learning project, but contributions and suggestions are welcome! Feel free to:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is open source and available for educational purposes.
- OpenAI for providing the GPT API
- Material-UI team for the excellent component library
- FastAPI community for the amazing web framework
- The React and TypeScript communities
Project Link: https://github.com/CodeHalwell/CoDHeChat
Note: This is a learning project created to explore full-stack development with modern web technologies. It demonstrates integrating AI capabilities into a web application with real-time communication.