The source code for this site was completely generated by GitHub Copilot. I did not write a single line of code for the application.
A Vite + React single‑page application with a lightweight Express API relay for Azure OpenAI and a persistent, multi‑session chat UI.
### UI / Layout
- Title Bar (logo capable), Menu Bar navigation, Footer
- Contact page (clickable email / phone links)
### Chat Experience
- Multi‑session conversation model (sessions persisted in `localStorage` under `chat_sessions_v1`).
- Sidebar “History” list shows sessions that contain at least one user message (greeting‑only sessions remain hidden).
- Resizable, collapsible History sidebar with persisted width and state; when collapsed it becomes a thin rail (not fully hidden) with an expand control and a drag handle.
- Rename (✎) and Delete (✕) actions per visible session.
- “New Chat” creates a fresh conversation and switches to it (does not delete existing history).
- Typing indicator (animated dots) while awaiting backend response.
- Markdown rendering for assistant replies (basic formatting, code blocks, lists, links).
- Scrollable chat window with custom styled scrollbars.
- Local fallback heuristic replies if backend fails / not configured.
### Backend / Architecture
- API service under `server/` – non‑streaming Chat Completions relay to Azure OpenAI.
- Keeps the API key & endpoint server‑side; the frontend only calls `/api/chat`.
- Adds a default system prompt if none is provided and trims empty/whitespace messages.
- Optional generation options are included only when set via environment variables (prevents unsupported parameter errors on some models).
- Clean separation: UI can be deployed as static assets; API can scale independently.
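The system-prompt defaulting and message trimming described above can be sketched as follows (hypothetical helper and prompt text; the actual implementation lives in `server/index.js` and may differ in detail):

```javascript
// Hypothetical sketch of the relay's request clean-up.
const DEFAULT_SYSTEM_PROMPT = 'You are a helpful, concise assistant.';

function prepareMessages(messages) {
  // Drop empty / whitespace-only messages.
  const cleaned = (messages || []).filter(
    (m) => m && m.content && m.content.trim() !== ''
  );
  // Prepend a default system prompt only if the caller did not supply one.
  if (!cleaned.some((m) => m.role === 'system')) {
    cleaned.unshift({ role: 'system', content: DEFAULT_SYSTEM_PROMPT });
  }
  return cleaned;
}

const prepared = prepareMessages([
  { role: 'user', content: 'Hello' },
  { role: 'user', content: '   ' }, // trimmed away
]);
console.log(prepared.length); // 2 — the default system prompt plus one user message
```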
### Persistence
- Per‑session conversations stored locally (no server storage yet)
- Automatic migration from the legacy single history (`chat_history_v1`) to sessions on first load.
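A minimal sketch of what such a one-time migration can look like (hypothetical helper; the real logic lives in the chat UI code and its session shape may differ — `storage` stands in for `window.localStorage`):

```javascript
// Hypothetical one-time migration from the legacy key to the sessions format.
// `storage` is any localStorage-like object with getItem/setItem/removeItem.
function migrateLegacyHistory(storage) {
  if (storage.getItem('chat_sessions_v1')) return; // already migrated
  const legacy = storage.getItem('chat_history_v1');
  if (!legacy) return; // nothing to migrate
  const messages = JSON.parse(legacy);
  const session = { id: 'migrated-' + Date.now(), title: 'Imported chat', messages };
  storage.setItem('chat_sessions_v1', JSON.stringify([session]));
  storage.removeItem('chat_history_v1'); // don't migrate twice
}
```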
### Observability / Debug
- Optional debug logging via `VITE_CHAT_DEBUG=1` (logs load/persist events, network calls).
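One way such a flag-gated logger can be structured (hypothetical helper; in the real app the flag would be read from Vite's `import.meta.env.VITE_CHAT_DEBUG`):

```javascript
// Hypothetical debug-logging guard: a no-op unless the flag is enabled.
function makeDebugLogger(flag) {
  const enabled = flag === '1' || flag === 'true';
  return (...args) => {
    if (enabled) console.log('[chat]', ...args);
  };
}

// In the app this might be: makeDebugLogger(import.meta.env.VITE_CHAT_DEBUG)
const debug = makeDebugLogger('1');
debug('sessions loaded');
```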
### Not Currently Enabled
- Token streaming (server sends the full answer after the model completes; can be reintroduced later).
### Getting Started

```powershell
# 1. Install dependencies (root)
npm install

# 2. Configure environment variables for the API (root .env or server/.env)
#    Copy the example into server/.env and fill in your Azure values
Copy-Item server/.env.example server/.env

# 3. Run the API (terminal 1)
npm run dev:api

# 4. Run the UI (terminal 2)
npm run dev

# (Optional) Single command helper (Windows PowerShell) to start API then UI:
npm run dev:full
```

Open the printed UI URL (usually http://localhost:5173). The API listens on http://localhost:3000 unless `PORT` is overridden in `.env` or `server/.env`.
| Script | Purpose |
|---|---|
| `npm run dev` | Start Vite dev server (frontend only) |
| `npm run dev:api` | Start API service in `server/` (Express relay) |
| `npm run dev:full` | Convenience: launches API then Vite (PowerShell specific) |
| `npm run build` | Production build (outputs to `dist/`) |
| `npm run preview` | Serve the built app locally to test the prod bundle |
The API service (`server/index.js`) exposes `POST /api/chat` and relays messages (non‑streaming) to Azure OpenAI Chat Completions. It responds with the assistant content as plain text.
Environment variables (set in root .env or server/.env):
```
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com
AZURE_OPENAI_API_KEY=<key>
AZURE_OPENAI_DEPLOYMENT=<deployment-name>
AZURE_OPENAI_API_VERSION=2024-06-01   # or a newer supported version
PORT=3000                             # optional override
AZURE_OPENAI_SYSTEM_PROMPT="You are a helpful, concise assistant..."  # optional, default provided

# Optional generation options (only sent if set; omit if your model doesn't support them)
AZURE_OPENAI_MAX_COMPLETION_TOKENS=1024
# Legacy fallback (if you already use it): AZURE_OPENAI_MAX_TOKENS=1024
AZURE_OPENAI_TEMPERATURE=1
AZURE_OPENAI_TOP_P=0.95
AZURE_OPENAI_PRESENCE_PENALTY=0.0
AZURE_OPENAI_FREQUENCY_PENALTY=0.5
```
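The "only sent if set" behavior can be sketched like this (hypothetical helper mirroring the description above; the real option handling is in `server/index.js`):

```javascript
// Hypothetical: build the generation options object, including a key
// only when its env var is set, so unset options are never sent upstream.
function buildGenerationOptions(env) {
  const opts = {};
  if (env.AZURE_OPENAI_MAX_COMPLETION_TOKENS) {
    opts.max_completion_tokens = Number(env.AZURE_OPENAI_MAX_COMPLETION_TOKENS);
  } else if (env.AZURE_OPENAI_MAX_TOKENS) {
    opts.max_tokens = Number(env.AZURE_OPENAI_MAX_TOKENS); // legacy fallback
  }
  if (env.AZURE_OPENAI_TEMPERATURE !== undefined) opts.temperature = Number(env.AZURE_OPENAI_TEMPERATURE);
  if (env.AZURE_OPENAI_TOP_P !== undefined) opts.top_p = Number(env.AZURE_OPENAI_TOP_P);
  if (env.AZURE_OPENAI_PRESENCE_PENALTY !== undefined) opts.presence_penalty = Number(env.AZURE_OPENAI_PRESENCE_PENALTY);
  if (env.AZURE_OPENAI_FREQUENCY_PENALTY !== undefined) opts.frequency_penalty = Number(env.AZURE_OPENAI_FREQUENCY_PENALTY);
  return opts; // empty object when nothing is set → no unsupported-parameter errors
}
```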
If required variables are missing, the endpoint returns a plain text diagnostic; the UI then falls back to heuristic responses.
### Security Notes
- Do NOT commit real keys; keep `.env` / `server/.env` out of version control.
- Consider adding rate limiting, request auth, and logging (e.g., Application Insights) before production hardening.
### State & Persistence
- Sessions stored locally (`localStorage`: `chat_sessions_v1`).
- Only sessions with at least one user message appear in the History sidebar.
- The default assistant greeting isn’t sent to the model (it’s UI-only).
- Sidebar preferences are persisted: width (`localStorage`: `chat_sidebar_w_v1`) and collapsed state (`localStorage`: `chat_sidebar_collapsed_v1`).
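The "at least one user message" filter can be sketched as follows (hypothetical helper and session shape; the real logic lives in `Chat.jsx`):

```javascript
// Hypothetical: only sessions containing a user message are listed in History,
// so greeting-only sessions stay hidden.
function visibleSessions(sessions) {
  return sessions.filter((s) => s.messages.some((m) => m.role === 'user'));
}

const shown = visibleSessions([
  { id: 'a', messages: [{ role: 'assistant', content: 'Hi there!' }] },          // hidden
  { id: 'b', messages: [{ role: 'assistant', content: 'Hi!' }, { role: 'user', content: 'Hello' }] }, // shown
]);
console.log(shown.length); // 1
```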
### Interaction
- Enter to send; Shift+Enter inserts newline.
- Typing indicator shows while awaiting the API.
- Assistant replies are rendered with lightweight Markdown.
- “New Chat” creates a new conversation without deleting existing history.
- Rename or delete sessions via icon buttons; deletion re-selects a remaining session (or adds a fresh hidden greeting session if all are removed).
- History sidebar
- Drag the vertical handle to resize; width is clamped to a sensible range.
- Click the chevron to collapse into a slim rail; click again to expand.
- Even when collapsed, a narrow rail and the drag handle remain so you can expand by dragging.
- On small screens (<880px), the history converts to a horizontal list and the resizer is hidden for usability.
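The drag-to-resize clamping can be sketched as below (the bounds here are made-up illustrations; the real values live in `Chat.jsx`):

```javascript
// Hypothetical sidebar width clamp: keep the dragged width in a sensible range.
const MIN_SIDEBAR_W = 160; // assumed lower bound, px
const MAX_SIDEBAR_W = 420; // assumed upper bound, px

function clampSidebarWidth(px) {
  return Math.min(MAX_SIDEBAR_W, Math.max(MIN_SIDEBAR_W, px));
}

console.log(clampSidebarWidth(500)); // 420 — dragging past the max is capped
```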
### Recovery & Fallback
- If the API fails (network / config), a heuristic assistant reply is generated client-side.
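A client-side fallback of this kind can be as simple as the sketch below (entirely hypothetical responses; the real heuristic is `assistantReply()` in `Chat.jsx`):

```javascript
// Hypothetical offline fallback: a canned reply keyed off the user's text,
// used only when the /api/chat call fails or the backend is not configured.
function fallbackReply(userText) {
  const t = (userText || '').toLowerCase();
  if (t.includes('hello') || t.includes('hi ')) return 'Hello! (offline fallback reply)';
  if (t.includes('help')) return 'The backend is unreachable right now, but here are some general tips...';
  return "I couldn't reach the server, so this is a locally generated reply.";
}
```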
### Current Limitations
- Non‑streaming responses (full assistant reply delivered after the model completes).
- No server‑side persistence or user identity scoping.
- Sessions stored entirely client-side (cleared by browser storage wipes / different device).
### Possible Future Enhancements
- Streaming (SSE) reintroduction.
- Export / import sessions (JSON download / upload).
- Token usage & cost estimation per session.
- Session search / filter & pinning.
- Backend persistence + authentication (per-user history).
- System / persona prompt presets.
- Syntax highlighting in code blocks.
- Rate limit & error retry strategy.
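The export / import idea above is not implemented; one purely hypothetical shape for it, reusing the existing `chat_sessions_v1` key (`storage` stands in for `window.localStorage`):

```javascript
// Hypothetical, unimplemented sketch of session export/import as JSON.
function exportSessionsJson(storage) {
  return storage.getItem('chat_sessions_v1') || '[]';
}

function importSessionsJson(storage, json) {
  JSON.parse(json); // validate before writing; throws on malformed input
  storage.setItem('chat_sessions_v1', json);
}
```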
```
AIWebsite/
├─ index.html            # HTML entry
├─ vite.config.js        # Vite configuration
├─ package.json          # Dependencies & scripts
├─ server/
│  ├─ index.js           # Azure OpenAI relay (non‑streaming)
│  └─ .env.example       # Example env vars for server only (optional)
├─ src/
│  ├─ main.jsx           # React root
│  ├─ App.jsx            # Routing & layout
│  ├─ components/        # MenuBar, TitleBar, Footer
│  ├─ pages/
│  │  ├─ Chat.jsx        # Multi‑session chat UI
│  │  └─ Contact.jsx
│  ├─ lib/markdown.js    # Lightweight markdown renderer
│  └─ styles.css         # Theme & component styles
├─ public/               # Static assets (e.g., logo)
└─ dist/                 # Production build output (UI)
```
| Area | How |
|---|---|
| Branding / Logo | Replace logo.png or update logoSrc usage in App.jsx / TitleBar.jsx. |
| Navigation | Edit links in src/App.jsx (MenuBar props or route list). |
| Theme Colors | Adjust CSS variables at top of src/styles.css. |
| Chat Heuristics | Modify assistantReply() in Chat.jsx for offline fallback behavior. |
| Sessions Sidebar | Adjust filtering / ordering logic in Chat.jsx (e.g., show empty sessions). |
| Backend Logic | Extend server/index.js (add logging, auth, rate limiting). |
| System Prompt | Set AZURE_OPENAI_SYSTEM_PROMPT (or include your own system message in requests). |
| Markdown Rules | Update src/lib/markdown.js to support additional syntax. |
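As an illustration of extending the markdown rules, a regex-based renderer can grow one rule at a time; the example below (strikethrough) is hypothetical and the internals of `src/lib/markdown.js` may look quite different:

```javascript
// Hypothetical example rule: render ~~text~~ as <del>text</del>,
// in the style of a lightweight regex-based markdown pass.
function renderStrikethrough(text) {
  return text.replace(/~~([^~]+)~~/g, '<del>$1</del>');
}

console.log(renderStrikethrough('a ~~b~~ c')); // a <del>b</del> c
```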
- Build the frontend: `npm run build` (outputs static assets in `dist/`).
- Deploy `dist/` (Azure Static Web Apps, Azure Storage Static Website, CDN, etc.).
- Deploy the API service (`server/`) separately (Azure App Service, Container Apps, Functions custom handler, etc.).
- Configure environment variables (Azure OpenAI credentials) in the API hosting environment.
- For production, either:
  - Host the API at the same origin under `/api` (recommended; use a reverse proxy), or
  - Set `VITE_API_BASE_URL` during the UI build (e.g., `https://your-api-host`) so the browser fetches the remote API.
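Resolving the API base URL under those two options can be sketched as follows (hypothetical helper; how the frontend actually builds its request URL may differ):

```javascript
// Hypothetical: combine an optional build-time base URL with the API path.
// An empty base yields a same-origin relative path (the /api reverse-proxy case).
function apiUrl(base, path) {
  return (base ? base.replace(/\/$/, '') : '') + path;
}

// Same origin (reverse proxy):
console.log(apiUrl('', '/api/chat')); // /api/chat
// Remote API host (VITE_API_BASE_URL set at build time):
console.log(apiUrl('https://your-api-host/', '/api/chat')); // https://your-api-host/api/chat
```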
For production, consider:
- Streaming to reduce perceived latency (optional feature).
- Request validation + rate limiting / WAF.
- CORS tightening (restrict to UI origin).
- Observability (App Insights / OpenTelemetry).
- Authentication (user‑scoped histories if backend persistence added).
| Issue | Check |
|---|---|
| No assistant reply | Is API server running (npm run dev:api)? Env vars set in server/.env (or root .env)? Check API console. |
| Always heuristic fallback | Network error or Azure credentials invalid; inspect /api/chat response body. |
| History missing after refresh | Confirm localStorage not blocked. Check chat_sessions_v1 key in DevTools > Application (or Storage) panel. |
| Cannot see new session in sidebar | Sessions appear only after the first user message (design choice). |
| Sidebar disappeared entirely | It’s minimized, not removed. Click the chevron on the left rail or drag the vertical handle to expand. |
| Resizer not visible on mobile | Below ~880px width, the resizer is hidden and the history becomes a horizontal strip. |
| Wrong endpoint error | Ensure endpoint ends with .openai.azure.com (no duplicate trailing slash). |
| Unsupported parameter: max_tokens | Newer Azure models require max_completion_tokens. Set AZURE_OPENAI_MAX_COMPLETION_TOKENS, or unset old vars. The server only sends options you set. |
| Unsupported value: temperature | Some deployments only support the default. Unset AZURE_OPENAI_TEMPERATURE (recommended), or set to a supported value. |
Internal / personal project scaffold. Add a license file if you intend to distribute.
Feel free to iterate: open an issue / add tasks for streaming, persistence, or richer UI.