A full-stack AI-powered customer support chat widget that simulates a live support agent for an e-commerce store. The application features a real-time chat interface where users can ask questions (e.g., return policies, shipping) and receive instant, context-aware responses from an LLM.
- Live Chat Interface: Clean, responsive UI built with React.
- AI-Powered Responses: Integrated with Groq (Llama 3 / Mixtral) for fast, intelligent replies.
- Conversation History: Persists chat sessions and messages in PostgreSQL.
- Robust Backend: Node.js & Express with TypeScript.
- Type Safety: Full TypeScript support across frontend and backend.
- Frontend: React, Vite, TypeScript, Generic UI Components.
- Backend: Node.js, Express, TypeScript.
- Database: PostgreSQL.
- Caching: Redis.
- AI/LLM: Groq SDK (compatible with OpenAI/Anthropic APIs).
- Node.js (v18+)
- PostgreSQL installed and running
- A Groq API key (or an OpenAI-compatible key if the code is adapted)
Create a PostgreSQL database and run the following SQL to set up the schema:
```sql
CREATE TABLE conversations (
  id TEXT PRIMARY KEY,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE messages (
  id SERIAL PRIMARY KEY,
  conversation_id TEXT REFERENCES conversations(id),
  sender VARCHAR(10),
  text TEXT,
  created_at TIMESTAMPTZ DEFAULT NOW()
);
```

- Navigate to the backend directory:

  ```bash
  cd backend
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Create a `.env` file in `backend/` with your configuration:

  ```env
  PORT=4000
  DATABASE_URL=postgres://user:password@localhost:5432/your_database_name
  GROQ_API_KEY=gsk_your_groq_api_key_here
  REDIS_URL=redis://localhost:6379
  ```

- Build and start the server:

  ```bash
  npm run build
  npm start
  ```

  Or for development with hot-reload:

  ```bash
  npm run dev
  ```
- Navigate to the frontend directory:

  ```bash
  cd frontend
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Create a `.env` file in `frontend/` (if using env vars) or ensure your `vite.config.ts`/`api.ts` is configured. For this project, you can set the backend URL:

  ```env
  VITE_API_URL=http://localhost:4000
  ```

- Start the development server:

  ```bash
  npm run dev
  ```
- Open http://localhost:5173 to view the chat widget.
(The first request may take some time to load, as I am hosting on Render's free plan.)
- Structure: Layered architecture separating concerns (a minimal sketch follows this list).
  - `routes/`: API endpoints (e.g., `/chat/message`). Handles HTTP request/response validation.
  - `services/`: Core business logic.
    - `llm.ts`: Handles communication with the AI provider (Groq). Includes prompts and history context management.
    - `db.ts`: Manages the PostgreSQL connection pool.
  - `config.ts`: Centralized configuration and environment variable validation.
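The actual contents of these files are not reproduced in this README; the following is only a minimal sketch of the pattern they describe (fail-fast env validation plus a shared `pg` pool), with the `dotenv` usage and exact names assumed:

```typescript
// Illustrative sketch (names and details assumed), combining the roles of
// config.ts (env validation) and services/db.ts (shared PostgreSQL pool).
import "dotenv/config";
import { Pool } from "pg";

// config.ts: read and validate required environment variables up front.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing environment variable: ${name}`);
  return value;
}

export const config = {
  port: Number(process.env.PORT ?? 4000),
  databaseUrl: requireEnv("DATABASE_URL"),
  groqApiKey: requireEnv("GROQ_API_KEY"),
  redisUrl: process.env.REDIS_URL ?? "redis://localhost:6379",
};

// services/db.ts: a single pg Pool shared by all queries.
export const pool = new Pool({ connectionString: config.databaseUrl });
```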
- Design Decisions:
  - Stateless API: Use of a `sessionId` allows the backend to remain stateless and scale easily (see the route sketch below).
  - Implicit Schema: Tables are queried directly with raw SQL for simplicity in this take-home scope, but are designed to be easily migrated to an ORM like Drizzle or Prisma.
  - Production Ready: TypeScript compilation to `dist/` ensures type safety and an optimized runtime.
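To illustrate the stateless design, here is a hypothetical sketch of the `/chat/message` route; the `generateReply` helper and the exact queries are assumptions, not the actual implementation:

```typescript
// routes/chat.ts (illustrative sketch): the route stays thin and stateless;
// all context is keyed by the sessionId sent with each request.
// Assumes express.json() middleware is registered on the app.
import { Router } from "express";
import { pool } from "../services/db";
import { generateReply } from "../services/llm"; // hypothetical service helper

export const chatRouter = Router();

chatRouter.post("/chat/message", async (req, res) => {
  const { sessionId, text } = req.body ?? {};
  if (typeof sessionId !== "string" || typeof text !== "string") {
    return res.status(400).json({ error: "sessionId and text are required" });
  }

  // Ensure the conversation row exists, then load prior messages for context.
  await pool.query(
    "INSERT INTO conversations (id) VALUES ($1) ON CONFLICT (id) DO NOTHING",
    [sessionId]
  );
  const history = await pool.query(
    "SELECT sender, text FROM messages WHERE conversation_id = $1 ORDER BY created_at",
    [sessionId]
  );

  const reply = await generateReply(text, history.rows);

  // Persist both sides of the exchange.
  await pool.query(
    "INSERT INTO messages (conversation_id, sender, text) VALUES ($1, 'user', $2), ($1, 'bot', $3)",
    [sessionId, text, reply]
  );

  res.json({ reply });
});
```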
- Structure: Vite + React application.
- Chat Component: Manages chat state, auto-scrolling, and optimistic UI updates for a snappy feel (sketched below).
- API Integration: Connects to the backend REST API (`/chat/message`).
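A minimal sketch of the optimistic-update pattern described above; the component and prop names here are assumptions, not the real component:

```tsx
// ChatWidget.tsx (illustrative sketch).
import { useState } from "react";

type Message = { sender: "user" | "bot"; text: string };

const API_URL = import.meta.env.VITE_API_URL ?? "http://localhost:4000";

export function ChatWidget({ sessionId }: { sessionId: string }) {
  const [messages, setMessages] = useState<Message[]>([]);
  const [draft, setDraft] = useState("");

  async function send() {
    const text = draft.trim();
    if (!text) return;
    setDraft("");
    // Optimistic update: render the user's message immediately, before the API responds.
    setMessages((prev) => [...prev, { sender: "user", text }]);

    const res = await fetch(`${API_URL}/chat/message`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ sessionId, text }),
    });
    const { reply } = await res.json();
    setMessages((prev) => [...prev, { sender: "bot", text: reply }]);
  }

  return (
    <div className="chat-widget">
      {messages.map((m, i) => (
        <p key={i} className={m.sender}>{m.text}</p>
      ))}
      <input value={draft} onChange={(e) => setDraft(e.target.value)} />
      <button onClick={send}>Send</button>
    </div>
  );
}
```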
- Provider: Groq (using Llama 3 8b or Mixtral 8x7b models).
- Why Groq?: Extremely low latency inference, providing a "real-time" chat feel essential for customer support agents.
- Prompting (see the sketch below):
  - Uses a system prompt to define the persona ("Helpful e-commerce support agent").
  - Injects conversation history to maintain context (multi-turn conversations).
  - Includes "Domain Knowledge" (shipping policy, returns, etc.) directly in the system prompt for reliable answers.
- Database: Currently using raw SQL queries for simplicity. In a real production app, I would use an ORM (Prisma/Drizzle) for type-safe database access and migration management.
- Security: Basic CORS and input validation are implemented. Production would require a stricter Content Security Policy and authentication.
- Styling: Uses basic CSS/styled components. Could be enhanced with Tailwind CSS or a UI library for better theming.
- Modularization: The codebase could be made more modular and efficient by following best practices, e.g., sharing types between frontend and backend and splitting larger services into smaller, focused units.
- Testing: Adding unit and integration tests (backend and frontend) would increase confidence and prevent regressions.
- Performance Optimization: Caching prompt context with Redis or implementing streaming responses would improve responsiveness and lower LLM cost (a caching sketch is shown below).
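One possible shape of the Redis caching mentioned above; this is not part of the current implementation, and it reuses the `config` and `pool` sketched earlier:

```typescript
// Illustrative sketch: cache recent conversation history in Redis to skip a
// PostgreSQL round trip on every request.
import { createClient } from "redis";
import { config } from "./config";
import { pool } from "./services/db";

const redis = createClient({ url: config.redisUrl });
await redis.connect();

type HistoryRow = { sender: string; text: string };

export async function getHistory(sessionId: string): Promise<HistoryRow[]> {
  const cached = await redis.get(`history:${sessionId}`);
  if (cached) return JSON.parse(cached) as HistoryRow[];

  const { rows } = await pool.query<HistoryRow>(
    "SELECT sender, text FROM messages WHERE conversation_id = $1 ORDER BY created_at",
    [sessionId]
  );
  // A short TTL keeps the cache from drifting far from the database.
  await redis.set(`history:${sessionId}`, JSON.stringify(rows), { EX: 60 });
  return rows;
}
```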