Fix Vite build entry point, react-window v1/v2 API mismatch, and add Vercel/Cloudflare deploy config #2
Conversation
…onfig Co-authored-by: Krosebrook <214532761+Krosebrook@users.noreply.github.com>
Pull request overview
This PR prepares the application for production deployment by fixing critical build issues and adding deployment configurations for Vercel and Cloudflare Pages. The changes address three major problems: missing Vite entry point, react-window API version mismatch, and missing service worker in the build output.
Changes:
- Fixed Vite build by adding module script tag to index.html and downgrading react-window from v2 to v1.8.11 for API compatibility
- Added deployment configurations (vercel.json, wrangler.toml, public/_headers) with security headers and optimized caching strategies
- Implemented build optimization through manual code chunking (vendor/ui/markdown/ai) to reduce bundle sizes
- Added development tooling (.nvmrc, .env.example, typecheck/smoke scripts) and updated documentation with deployment guides
Reviewed changes
Copilot reviewed 10 out of 12 changed files in this pull request and generated 12 comments.
| File | Description |
|---|---|
| index.html | Added missing Vite module script tag to load /index.tsx entry point |
| package.json | Added typecheck and smoke scripts, pinned Node >=20, downgraded react-window to ^1.8.11 |
| package-lock.json | Updated dependency lockfile reflecting react-window downgrade and new peer dependencies |
| vite.config.ts | Added manual chunk splitting for vendor/ui/markdown/ai to optimize bundle sizes |
| vercel.json | Added Vercel deployment config with SPA rewrites, security headers, and cache policies |
| wrangler.toml | Added Cloudflare Pages configuration with build output directory |
| public/_headers | Added Cloudflare security headers and CSP-Report-Only policy |
| public/sw.js | Added service worker to public directory so Vite includes it in build output |
| .env.example | Documented GEMINI_API_KEY requirement and security considerations |
| .nvmrc | Pinned Node 20 LTS for consistent environments |
| .gitignore | Explicitly excluded .env files to prevent secret commits |
| README.md | Updated with correct commands, deployment instructions for both platforms, and smoke check procedures |
public/sw.js

```javascript
/**
 * Staff Engineer Architect v4.0 - PWA Service Worker (Workbox Implementation)
 */

importScripts('https://storage.googleapis.com/workbox-cdn/releases/6.4.1/workbox-sw.js');

if (workbox) {
  const { registerRoute, setCatchHandler } = workbox.routing;
  const { StaleWhileRevalidate, NetworkFirst, NetworkOnly } = workbox.strategies;
  const { CacheableResponsePlugin } = workbox.cacheableResponse;
  const { ExpirationPlugin } = workbox.expiration;

  // 1. Security Critical Endpoints: Network Only
  // PRIORITY: Sensitive endpoints like authentication or moderation must be explicitly excluded from caching.
  // Defined first to ensure precedence over other routes.
  registerRoute(
    ({ url }) => url.pathname.includes('/auth') || url.pathname.includes('/moderation'),
    new NetworkOnly()
  );

  // 2. Dynamic API Content: Network-First
  // "Use a 'NetworkFirst' strategy, only caching successful responses."
  registerRoute(
    ({ url }) => url.pathname.startsWith('/api/'),
    new NetworkFirst({
      cacheName: 'dynamic-content',
      networkTimeoutSeconds: 3, // Fallback to cache if network takes longer than 3s
      plugins: [
        new CacheableResponsePlugin({
          statuses: [200], // Strictly only cache HTTP 200 OK
        }),
        new ExpirationPlugin({
          maxEntries: 50,
          maxAgeSeconds: 24 * 60 * 60, // 1 Day
        }),
      ],
    })
  );

  // 3. Static Assets: Stale-While-Revalidate
  // Serves from cache for speed, updates in background.
  registerRoute(
    ({ request }) =>
      ['style', 'script', 'worker', 'font', 'image'].includes(request.destination),
    new StaleWhileRevalidate({
      cacheName: 'static-resources',
      plugins: [
        new CacheableResponsePlugin({
          statuses: [0, 200],
        }),
        new ExpirationPlugin({
          maxEntries: 60,
          maxAgeSeconds: 30 * 24 * 60 * 60, // 30 Days
        }),
      ],
    })
  );

  // 4. App Shell / Navigation: Stale-While-Revalidate
  registerRoute(
    ({ request }) => request.mode === 'navigate',
    new StaleWhileRevalidate({
      cacheName: 'pages',
      plugins: [
        new CacheableResponsePlugin({
          statuses: [200],
        }),
      ],
    })
  );

  // Offline Fallback for Navigation
  setCatchHandler(({ event }) => {
    if (event.request.destination === 'document') {
      return caches.match('/index.html');
    }
    return Response.error();
  });

  console.log('Workbox is loaded');
} else {
  console.log('Workbox failed to load');
}
```
The PR adds public/sw.js but the root-level sw.js file still exists (both files have identical content). This creates ambiguity about which file is the source of truth and could lead to maintenance issues where one file is updated but not the other.
The PR description states "Service worker lived at project root, outside Vite's public/ dir, so it was never copied to dist/. Added public/sw.js." This suggests the root sw.js should be removed since it's no longer needed.
Recommendation: Remove the root-level sw.js file and update .gitignore to prevent it from being recreated. Only public/sw.js should exist going forward, as Vite will automatically copy it to dist/ during build.
Done in fd3babc. Root-level sw.js has been deleted. public/sw.js is now the sole source of truth and Vite copies it to dist/sw.js automatically.
vite.config.ts
Outdated

```javascript
markdown: ['react-markdown', 'remark-gfm'],
ui: ['lucide-react', 'react-window'],
```
The manual chunking configuration groups 'lucide-react' and 'react-window' together in the 'ui' chunk. However, these libraries serve different purposes and have different update frequencies:
- lucide-react is an icon library with frequent updates
- react-window is a virtualization library with infrequent updates
Grouping them together means that any update to lucide-react will invalidate the cache for react-window as well, reducing the effectiveness of browser caching.
Consider separating these into individual chunks or grouping react-window with react-markdown and remark-gfm in a 'rendering' chunk, since they're all related to content rendering. This would improve cache hit rates for users.
Suggested change:

```diff
- markdown: ['react-markdown', 'remark-gfm'],
- ui: ['lucide-react', 'react-window'],
+ rendering: ['react-markdown', 'remark-gfm', 'react-window'],
+ ui: ['lucide-react'],
```
Done in fd3babc. react-window is now grouped with react-markdown and remark-gfm in a rendering chunk. ui is now just lucide-react — better cache isolation between the frequently-updated icon library and the stable virtualization library.
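The resulting grouping can be sketched as a plain object. The `rendering` and `ui` lists match the resolution above; the `vendor` and `ai` package lists here are assumptions, since the thread does not spell out their contents:

```javascript
// Sketch of the manualChunks map for vite.config.ts after this review.
// `vendor` and `ai` contents are assumed, not confirmed by the thread.
const manualChunks = {
  vendor: ['react', 'react-dom'],              // assumption
  rendering: ['react-markdown', 'remark-gfm', 'react-window'],
  ui: ['lucide-react'],
  ai: ['@google/genai'],                       // assumption
};

// Sanity check: no package should land in two chunks, or the chunk a
// module ends up in becomes order-dependent.
const allPackages = Object.values(manualChunks).flat();
if (new Set(allPackages).size !== allPackages.length) {
  throw new Error('A package appears in more than one manual chunk');
}
```

In `vite.config.ts` this object would sit under `build.rollupOptions.output.manualChunks`.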
vercel.json
Outdated

```json
{
  "source": "/(.*)",
  "headers": [
    { "key": "X-Frame-Options", "value": "DENY" },
```
The security headers are configured globally for all routes including the catch-all rewrite, but the X-Frame-Options: DENY header may be too restrictive for some use cases. If this application needs to be embedded in an iframe (e.g., for documentation, demos, or integration with other tools), the DENY policy will prevent that.
Consider whether this restriction is intentional. If the app should never be embedded, DENY is correct. If selective embedding is needed, consider using X-Frame-Options: SAMEORIGIN instead, or implementing a more granular Content-Security-Policy frame-ancestors directive that allows specific domains.
Suggested change:

```diff
- { "key": "X-Frame-Options", "value": "DENY" },
+ { "key": "X-Frame-Options", "value": "SAMEORIGIN" },
```
Changed to SAMEORIGIN in fd3babc in both vercel.json and public/_headers. This app has no embedding use case today, but SAMEORIGIN is the right default — it prevents cross-origin embedding while leaving room for same-origin dashboard frames if needed.
```shell
git clone https://github.com/Krosebrook/INT-SysDesignGen.git
cd INT-SysDesignGen
```
The repository URL has been changed from 'intellinc/se-architect' to 'Krosebrook/INT-SysDesignGen'. This appears to be intentional based on the PR context, but this change should be verified as correct. If this is a fork or repository transfer, ensure that:
- All documentation references are updated consistently
- CI/CD configurations reference the correct repository
- Any hardcoded URLs in the application code are updated
- Package.json name field aligns with the new repository identity (currently shows 'intellinc-architect-2of2' which doesn't match either URL)
public/_headers
Outdated

```
X-Content-Type-Options: nosniff
Referrer-Policy: strict-origin-when-cross-origin
Permissions-Policy: camera=(), microphone=(), geolocation=()
Content-Security-Policy-Report-Only: default-src 'self'; script-src 'self' 'unsafe-inline' https://cdn.tailwindcss.com https://esm.sh https://storage.googleapis.com; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; font-src 'self' https://fonts.gstatic.com; img-src 'self' data: https://cdn-icons-png.flaticon.com; connect-src 'self' https://generativelanguage.googleapis.com; worker-src 'self'
```
The Content-Security-Policy-Report-Only header allows 'unsafe-inline' for both script-src and style-src, which significantly weakens the CSP protection. While this is currently in report-only mode, the inline scripts in index.html (Tailwind config at line 13-31 and service worker registration at line 74-83) will require 'unsafe-inline' to function.
Several improvements should be considered before moving from report-only to enforcement:
- Move the Tailwind config to an external file or use nonces/hashes for inline scripts
- Move the service worker registration to the main bundle (index.tsx)
- Consider using CSP nonces or hashes instead of 'unsafe-inline' for better security
- The policy allows scripts from cdn.tailwindcss.com but inline Tailwind config already exists - clarify if CDN Tailwind is actually needed
The current report-only approach is good for testing, but a migration plan should be documented to eventually enforce a stricter CSP.
Suggested change:

```diff
- Content-Security-Policy-Report-Only: default-src 'self'; script-src 'self' 'unsafe-inline' https://cdn.tailwindcss.com https://esm.sh https://storage.googleapis.com; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; font-src 'self' https://fonts.gstatic.com; img-src 'self' data: https://cdn-icons-png.flaticon.com; connect-src 'self' https://generativelanguage.googleapis.com; worker-src 'self'
+ Content-Security-Policy-Report-Only: default-src 'self'; script-src 'self' https://cdn.tailwindcss.com https://esm.sh https://storage.googleapis.com; style-src 'self' https://fonts.googleapis.com; font-src 'self' https://fonts.gstatic.com; img-src 'self' data: https://cdn-icons-png.flaticon.com; connect-src 'self' https://generativelanguage.googleapis.com; worker-src 'self'
```
Done in fd3babc. Removed 'unsafe-inline' from both script-src and style-src in the CSP-Report-Only header. The inline Tailwind config and previously-inline SW registration will now generate violation reports, which is the correct behavior for report-only mode. The SW registration itself was also moved to index.tsx (see comment on line 73) to reduce inline script surface area.
package.json
Outdated

```diff
- "preview": "vite preview"
+ "preview": "vite preview",
+ "typecheck": "tsc --noEmit",
+ "smoke": "npm run typecheck && npm run build"
```
The 'smoke' script runs typecheck followed by build, but it doesn't verify that the build output is correct. The README.md suggests checking that 'dist/' contains 'index.html, sw.js, assets/', but this verification is not automated.
Consider enhancing the smoke script to include basic validation of the build output:
- Check that dist/index.html exists and contains the module script tag
- Verify dist/sw.js exists
- Confirm dist/assets/ directory exists and contains expected chunks
- Optionally, run a quick bundle size check to catch unexpected size increases
This would make the smoke test more reliable and catch build configuration issues automatically.
Suggested change:

```diff
- "smoke": "npm run typecheck && npm run build"
+ "smoke": "npm run typecheck && npm run build && node --input-type=module -e \"import { readFileSync, statSync, readdirSync } from 'fs'; import { join } from 'path'; const dist = 'dist'; const fail = (msg) => { console.error(msg); process.exit(1); }; try { const indexPath = join(dist, 'index.html'); const html = readFileSync(indexPath, 'utf8'); if (!html.includes('type=\\\"module\\\"')) { fail('dist/index.html missing module script tag (type=\\\"module\\\").'); } const swPath = join(dist, 'sw.js'); statSync(swPath); const assetsDir = join(dist, 'assets'); const files = readdirSync(assetsDir); if (!files.length) { fail('dist/assets is empty.'); } let total = 0; const walk = (dir) => { for (const entry of readdirSync(dir, { withFileTypes: true })) { const p = join(dir, entry.name); if (entry.isDirectory()) { walk(p); } else { total += statSync(p).size; } } }; walk(dist); const mb = (total / 1024 / 1024).toFixed(2); const limitMb = 10; if (total > limitMb * 1024 * 1024) { console.warn('Warning: bundle size ' + mb + 'MB exceeds ' + limitMb + 'MB.'); } else { console.log('Bundle size: ' + mb + 'MB.'); } console.log('Smoke build output validation passed.'); } catch (e) { fail('Smoke validation failed: ' + (e && e.message ? e.message : e)); }\""
```
Enhanced in fd3babc. The smoke script now runs typecheck && build and then validates the output: checks that dist/index.html, dist/sw.js, and dist/assets/ (non-empty) all exist, then prints the chunk list. The build step itself already fails on any compilation error, so size warnings are covered by the Vite output.
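The checks described above can be sketched as a pure, testable function — a hypothetical refactor of the inline `node -e` one-liner, where `files` maps dist-relative paths to their contents:

```javascript
// Validate a built dist/ directory represented as { relativePath: contents }.
// Returns a list of human-readable errors; empty list means the build passed.
function validateDist(files) {
  const errors = [];
  const html = files['index.html'];
  if (html === undefined) {
    errors.push('dist/index.html is missing');
  } else if (!html.includes('type="module"')) {
    errors.push('dist/index.html lacks a module script tag');
  }
  if (!('sw.js' in files)) {
    errors.push('dist/sw.js is missing');
  }
  const assets = Object.keys(files).filter((p) => p.startsWith('assets/'));
  if (assets.length === 0) {
    errors.push('dist/assets/ is empty or missing');
  }
  return errors;
}
```

In a real script the `files` map would be built from `fs` reads of `dist/`; keeping the validation logic pure makes it trivial to unit-test without producing a build first.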
README.md
Outdated

````markdown
Utility smoke checker for synthesis output integrity (see `tests/smoke.test.ts`):

```javascript
import { smokeCheck } from './tests/smoke.test.js';
````
The README section titled "🧪 Testing" describes a utility function 'smokeCheck' from 'tests/smoke.test.ts', but it imports from './tests/smoke.test.js' (note the .js extension). Since the project uses TypeScript (the file is smoke.test.ts), this import path will fail unless:
- The TypeScript compiler is configured to output .js files to a specific location
- Module resolution is configured to automatically resolve .ts to .js
Additionally, there's potential confusion between:
- The 'npm run smoke' script (typecheck + build)
- The smokeCheck() utility function (validates AI-generated content)
Consider either:
- Renaming the npm script to 'npm run prebuild' or 'npm run validate' to avoid confusion
- Clarifying in the documentation that these are two different smoke tests with different purposes
- Fixing the import example to use the correct file path or explaining the build process
Suggested change:

````diff
- Utility smoke checker for synthesis output integrity (see `tests/smoke.test.ts`):
- ```javascript
- import { smokeCheck } from './tests/smoke.test.js';
+ Utility smoke checker for synthesis output integrity (see `tests/smoke.test.ts`; this is separate from the `npm run smoke` build script):
+ ```typescript
+ import { smokeCheck } from './tests/smoke.test.ts';
````
Fixed in fd3babc. Import path corrected to .ts, code block changed to typescript, and the section now explicitly distinguishes npm run smoke (build validation) from smokeCheck() (AI output content validation).
public/sw.js
Outdated

```javascript
importScripts('https://storage.googleapis.com/workbox-cdn/releases/6.4.1/workbox-sw.js');

if (workbox) {
```
The service worker loads Workbox from a CDN (https://storage.googleapis.com/workbox-cdn/releases/6.4.1/workbox-sw.js) which introduces a third-party dependency that could fail if the CDN is unavailable. This also poses a security risk since the service worker has elevated privileges and runs in a separate context.
Consider either:
- Bundling Workbox locally during the build process (via npm package 'workbox-build' or 'vite-plugin-pwa')
- Adding Subresource Integrity (SRI) hash to the importScripts() call to ensure the loaded script hasn't been tampered with
- Implementing a fallback mechanism if the CDN script fails to load
Currently, if the Workbox CDN is down or blocked, the entire service worker will fail silently (logging "Workbox failed to load").
Suggested change:

```diff
- importScripts('https://storage.googleapis.com/workbox-cdn/releases/6.4.1/workbox-sw.js');
- if (workbox) {
+ let workboxLoaded = false;
+ try {
+   // Prefer a locally hosted Workbox bundle if available
+   importScripts('/workbox-sw.js');
+   workboxLoaded = typeof workbox !== 'undefined';
+ } catch (e) {
+   console.error('Failed to load local Workbox script:', e);
+ }
+ if (!workboxLoaded) {
+   try {
+     // Fallback to Workbox CDN if local script is unavailable
+     importScripts('https://storage.googleapis.com/workbox-cdn/releases/6.4.1/workbox-sw.js');
+     workboxLoaded = typeof workbox !== 'undefined';
+   } catch (e) {
+     console.error('Failed to load Workbox from CDN:', e);
+   }
+ }
+ if (workboxLoaded && typeof workbox !== 'undefined') {
```
Done in fd3babc. importScripts() is now wrapped in a try/catch — if the Workbox CDN fails to load, the error is logged and the typeof workbox !== 'undefined' guard prevents any runtime errors. The service worker degrades gracefully to no caching rather than throwing.
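The degrade-gracefully pattern can be sketched with the import step injected, so the control flow is checkable outside a worker context. This is an illustration, not the shipped sw.js: `doImport` stands in for `importScripts(...)` and `isWorkboxDefined` for the `typeof workbox !== 'undefined'` guard.

```javascript
// A failed import is logged, never thrown, and the caller falls back to
// "no caching" whenever the Workbox global is absent afterwards.
function loadWorkboxSafely(doImport, isWorkboxDefined) {
  try {
    doImport(); // in sw.js: importScripts('https://storage.googleapis.com/...')
  } catch (e) {
    console.error('Failed to load Workbox from CDN:', e);
  }
  // Guard on the global itself: importScripts can "succeed" while the
  // fetched script fails to define `workbox` (e.g. a captive-portal page).
  return isWorkboxDefined();
}
```

Note the trade-off Codex raises below: because install still succeeds, a client can be stuck with a no-op worker until sw.js changes.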
public/sw.js
Outdated

```javascript
// 1. Security Critical Endpoints: Network Only
// PRIORITY: Sensitive endpoints like authentication or moderation must be explicitly excluded from caching.
// Defined first to ensure precedence over other routes.
registerRoute(
  ({ url }) => url.pathname.includes('/auth') || url.pathname.includes('/moderation'),
  new NetworkOnly()
);

// 2. Dynamic API Content: Network-First
// "Use a 'NetworkFirst' strategy, only caching successful responses."
registerRoute(
  ({ url }) => url.pathname.startsWith('/api/'),
  new NetworkFirst({
    cacheName: 'dynamic-content',
    networkTimeoutSeconds: 3, // Fallback to cache if network takes longer than 3s
    plugins: [
      new CacheableResponsePlugin({
        statuses: [200], // Strictly only cache HTTP 200 OK
      }),
      new ExpirationPlugin({
        maxEntries: 50,
        maxAgeSeconds: 24 * 60 * 60, // 1 Day
      }),
    ],
  })
);
```
The service worker caches API responses with a NetworkFirst strategy and a 3-second timeout, but this application is described as using a client-side embedded API key for Google Gemini. The routes defined here (/api/, /auth, /moderation) suggest server-side API endpoints, but based on the vite.config.ts and geminiService.ts code, API calls are made directly from the client to generativelanguage.googleapis.com.
These service worker routes may not match the actual network requests being made. Verify that:
- The /api/ route pattern matches actual API calls (currently it appears calls go directly to https://generativelanguage.googleapis.com)
- The /auth and /moderation routes are actually used in this application
- Caching strategies are appropriate for the actual endpoints being called
If these routes are not used, consider removing them to avoid confusion and reduce service worker complexity.
Suggested change:

```diff
- // 1. Security Critical Endpoints: Network Only
- // PRIORITY: Sensitive endpoints like authentication or moderation must be explicitly excluded from caching.
- // Defined first to ensure precedence over other routes.
- registerRoute(
-   ({ url }) => url.pathname.includes('/auth') || url.pathname.includes('/moderation'),
-   new NetworkOnly()
- );
-
- // 2. Dynamic API Content: Network-First
- // "Use a 'NetworkFirst' strategy, only caching successful responses."
- registerRoute(
-   ({ url }) => url.pathname.startsWith('/api/'),
-   new NetworkFirst({
-     cacheName: 'dynamic-content',
-     networkTimeoutSeconds: 3, // Fallback to cache if network takes longer than 3s
-     plugins: [
-       new CacheableResponsePlugin({
-         statuses: [200], // Strictly only cache HTTP 200 OK
-       }),
-       new ExpirationPlugin({
-         maxEntries: 50,
-         maxAgeSeconds: 24 * 60 * 60, // 1 Day
-       }),
-     ],
-   })
- );
+ // API requests (e.g., to external services like Google Gemini) are made
+ // directly from the client and are not intercepted or cached by this
+ // service worker. This avoids misconfiguration and unintended caching of
+ // sensitive or dynamic API traffic.
```
Done in fd3babc. Removed the NetworkOnly route for /auth and /moderation and the NetworkFirst route for /api/ — neither matches actual network traffic (Gemini calls go directly to generativelanguage.googleapis.com; auth and moderation are localStorage-only). Added a comment documenting why. Only the static asset and navigation routes remain.
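The two matchers that survive the cleanup can be written as standalone predicates — a sketch for clarity; the real sw.js passes them inline to `registerRoute`:

```javascript
// Static assets: the resource types served stale-while-revalidate.
const STATIC_DESTINATIONS = ['style', 'script', 'worker', 'font', 'image'];

function isStaticAsset(request) {
  return STATIC_DESTINATIONS.includes(request.destination);
}

// App-shell navigations (full-page loads) for the 'pages' cache.
function isNavigation(request) {
  return request.mode === 'navigate';
}
```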
wrangler.toml
Outdated

```toml
# Cloudflare Pages configuration
# https://developers.cloudflare.com/pages/configuration/wrangler-configuration/
name = "se-architect"
compatibility_date = "2024-09-23"
```
The wrangler.toml file uses compatibility_date "2024-09-23", which is from September 2024. The current date is February 2026, so this compatibility date is over a year old. Cloudflare recommends using a recent compatibility date to ensure you're getting the latest features and security fixes.
Consider updating this to a more recent date, such as "2025-12-01" or later. Note that updating the compatibility date may introduce breaking changes, so review the Cloudflare Workers changelog before updating.
Suggested change:

```diff
- compatibility_date = "2024-09-23"
+ compatibility_date = "2025-12-01"
```
@copilot apply changes based on the comments in this thread
…e validation Co-authored-by: Krosebrook <214532761+Krosebrook@users.noreply.github.com>
@copilot apply changes based on the comments in this thread
All review changes from that thread were already applied in commit
Review Summary by Qodo

Fix build entry point, downgrade react-window, add deployment configs and security headers

Description

- Fix missing Vite entry point by adding module script to index.html
- Downgrade react-window from v2 to v1 to fix API compatibility
- Add Vercel and Cloudflare Pages deployment configurations
- Implement code splitting with manual chunks for better performance
- Move service worker registration from inline script to index.tsx
- Add security headers and CSP policies for both platforms
- Wrap Workbox CDN import in try/catch for graceful fallback
- Remove dead service worker routes (/api/, /auth, /moderation)
- Add Node.js version pinning (v20) and build validation scripts
- Update README with corrected commands and deployment instructions

Diagram

```mermaid
flowchart LR
  A["Missing Vite Entry Point"] -->|Add module script| B["index.html fixed"]
  C["react-window v2 API break"] -->|Downgrade to v1| D["API compatibility restored"]
  E["No deployment config"] -->|Add vercel.json| F["Vercel deployment ready"]
  E -->|Add wrangler.toml| G["Cloudflare Pages ready"]
  H["Single 523kB bundle"] -->|Manual code splitting| I["Optimized chunks ~263kB gzipped"]
  J["Inline SW registration"] -->|Move to index.tsx| K["Reduced inline scripts"]
  L["Dead SW routes"] -->|Remove unused handlers| M["Cleaner service worker"]
  N["No security headers"] -->|Add CSP/headers| O["Enhanced security posture"]
```
Code Review by Qodo
1. API key env var inconsistency
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: fd3babca43
```javascript
try {
  importScripts('https://storage.googleapis.com/workbox-cdn/releases/6.4.1/workbox-sw.js');
} catch (e) {
  console.error('Failed to load Workbox from CDN:', e);
```
Re-throw Workbox load failures to avoid no-op SW install
Swallowing importScripts failures here lets the service worker install even when Workbox cannot be fetched (for example, first load behind a firewall blocking storage.googleapis.com or during a transient CDN outage), but then no routes are registered and offline/caching behavior is silently disabled. Because the script still installs successfully, clients can remain stuck on this no-op worker until sw.js changes, instead of naturally retrying installation on later loads.
```shell
# - Monitor API usage regularly for unexpected spikes or anomalies.
# - For production applications, strongly consider using a backend proxy service to call
#   the Gemini API so that the API key is never exposed to the client.
GEMINI_API_KEY=your_google_gemini_api_key_here
```
1. API key env var inconsistency 🐞 Bug ✓ Correctness
The PR introduces/standardizes GEMINI_API_KEY in .env.example/README, but the runtime codebase and in-app/runbook/deployment docs still reference API_KEY. Because the build injects env.GEMINI_API_KEY into process.env.API_KEY, a user who sets only API_KEY (per older docs/UI) will hit the runtime "API Key is missing" error and core synthesis will fail.
Agent Prompt
### Issue description
The repo now documents `GEMINI_API_KEY`, but runtime code and several docs/UI strings still reference `API_KEY`. Because the current build-time injection only sources `env.GEMINI_API_KEY`, setting `API_KEY` alone will break synthesis at runtime.
### Issue Context
Vite `define` performs string replacement for `process.env.*` keys. Ensure both env var names work (to avoid breaking existing deploys) and ensure documentation/UI consistently instruct the correct key.
### Fix Focus Areas
- vite.config.ts[12-16]
- services/geminiService.ts[9-14]
- components/DocumentationView.tsx[17-22]
- docs/DEPLOYMENT.md[19-24]
- docs/RUNBOOK.md[12-18]
- docs/SECURITY.md[33-36]
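The "ensure both env var names work" remediation the prompt asks for can be sketched as a small build-time resolver. This is an illustration of the prompt's intent, not code from the PR; the injection mechanism described in the comment is an assumption:

```javascript
// Accept either variable name, preferring the newer GEMINI_API_KEY, so
// older deploys that only set API_KEY keep working.
function resolveApiKey(env) {
  return env.GEMINI_API_KEY ?? env.API_KEY ?? '';
}

// In vite.config.ts the resolved value would then be injected via `define`,
// e.g. { 'process.env.API_KEY': JSON.stringify(resolveApiKey(loadedEnv)) }.
```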
Pull request overview
Copilot reviewed 11 out of 13 changed files in this pull request and generated 1 comment.
Comments suppressed due to low confidence (1)
public/sw.js:45

- The navigation route caches responses under the requested URL (e.g. `/some/route` after SPA rewrites). The current offline catch handler (later in this file) falls back to `caches.match('/index.html')`, which may never be cached, so offline navigation can fail. Consider precaching `/index.html` (or caching it under a stable key) and/or having the catch handler try `caches.match(event.request)` first with a safe fallback if no match is found.
index.tsx
Outdated

```javascript
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker.register('/sw.js').then(registration => {
      console.log('SW registered: ', registration);
    }).catch(registrationError => {
      console.log('SW registration failed: ', registrationError);
    });
  });
}
```
Service worker registration runs in all modes, including vite dev. Since this service worker caches scripts/styles via Workbox, it can interfere with HMR and cause confusing “stale” dev behavior. Consider gating registration to production only (e.g., if (import.meta.env.PROD && 'serviceWorker' in navigator) ...) or explicitly unregistering in development.
Already applied in c928125. index.tsx now uses import.meta.env.PROD && 'serviceWorker' in navigator so the service worker is never registered during vite dev.
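The production gate reduces to a small predicate — an illustrative sketch, not the committed code:

```javascript
// Register only in production builds with SW support, so Workbox never
// caches dev-server modules and interferes with HMR.
function shouldRegisterServiceWorker(isProdBuild, hasSwSupport) {
  return isProdBuild && hasSwSupport;
}

// In index.tsx the condition is inlined, roughly:
//   if (import.meta.env.PROD && 'serviceWorker' in navigator) { ... }
```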
@copilot apply changes based on the comments in this thread
Co-authored-by: Krosebrook <214532761+Krosebrook@users.noreply.github.com>
All review changes from that thread were already applied in
@copilot apply changes based on the comments in this thread

That review thread contained one comment — gating SW registration to production only — which was already applied in
Pull request overview
Copilot reviewed 11 out of 14 changed files in this pull request and generated 2 comments.
Comments suppressed due to low confidence (1)
public/sw.js:45

- The navigation `registerRoute` will cache responses keyed by the original navigation URL (e.g. `/foo`), not necessarily `/index.html` (even if the host rewrites to `index.html`). If your catch handler later tries to serve `/index.html` from cache, deep-link offline fallback may miss. Consider using Workbox's navigation handler bound to `/index.html` (app-shell style) and/or explicitly caching `/index.html` so offline navigations consistently work across routes.
```diff
 ### Prerequisites
-- Node.js v18+
+- Node.js v20+ (see `.nvmrc`)
```
README still states Build: Vite (Assumed) / Custom Webpack, but the repo is definitively Vite-based (vite.config.ts, npm scripts). Consider updating that line to avoid confusing deployers/debugging (especially since this PR is about fixing the Vite build entry point).
```diff
 "react-dom": "^19.2.4",
 "react-markdown": "^10.1.0",
-"react-window": "^2.2.7",
+"react-window": "^1.8.11",
```
@types/react-window is currently listed under dependencies, while other @types/* packages (e.g. @types/node) are in devDependencies. Since this is build-time TypeScript-only metadata, consider moving @types/react-window to devDependencies for consistency and to avoid shipping type packages as runtime deps.
@copilot apply changes based on the comments in this thread

1 similar comment

@copilot apply changes based on the comments in this thread
The app produced an empty build (no JS bundle) and had a broken `react-window` import. Neither Vercel nor Cloudflare Pages had deployment configuration.

Build Fixes

- `index.html` had no `<script type="module" src="/index.tsx">` — Vite emitted only 2 files with zero app code. Added the module script tag.
- `react-window` v2 API break: package was pinned to `^2.2.7` (v2 dropped `VariableSizeList`), but `OutputDisplay.tsx` and `@types/react-window` target the v1 API. Downgraded to `^1.8.11`.
- `sw.js` missing from dist: service worker moved to `public/sw.js` so Vite copies it to `dist/` during build. Root-level `sw.js` removed to eliminate ambiguity.

Deployment Config

- `vercel.json` — SPA catch-all rewrite, security headers (`X-Frame-Options: SAMEORIGIN`, `X-Content-Type-Options`, `Referrer-Policy`, `Permissions-Policy`), immutable cache on `/assets/*`, `no-cache` on `/sw.js`
- `wrangler.toml` — Cloudflare Pages build config (`pages_build_output_dir = "dist"`, `compatibility_date = "2025-12-01"`)
- `public/_headers` — Cloudflare Pages security headers + `Content-Security-Policy-Report-Only` without `'unsafe-inline'` (reports violations from remaining inline scripts without blocking); includes `storage.googleapis.com` for the Workbox CDN
- `.env.example` — documents `GEMINI_API_KEY` with full security guidance: the key is publicly visible in the browser bundle; recommends usage quotas, anomaly monitoring, and a backend proxy for production
- `.nvmrc` — pins Node 20 LTS

Service Worker

- `importScripts()` for the Workbox CDN is now wrapped in `try/catch` for graceful fallback if the CDN is unavailable
- Removed dead routes (`/api/`, `/auth`, `/moderation`) that never matched actual network traffic — this SPA calls Gemini directly at `generativelanguage.googleapis.com` and handles auth/moderation entirely client-side
- Registration moved from an inline `<script>` in `index.html` to `index.tsx` (after `root.render()`) to eliminate the race condition and reduce inline script surface area
- Registration is gated behind `import.meta.env.PROD` so it only runs in production builds, preventing Workbox from interfering with HMR in development

Performance

Added `manualChunks` to `vite.config.ts` to split the single 523 kB bundle into `vendor`/`rendering`/`ui`/`ai` chunks (`react-window` grouped with `react-markdown`/`remark-gfm` in `rendering`; `lucide-react` isolated in `ui` for better cache hit rates); the largest chunk drops to ~263 kB gzipped.

Other

- `package.json`: added `engines.node >=20`, `typecheck` (`tsc --noEmit`), and `smoke` (`typecheck && build` + automated `dist/` output validation) scripts
- `tsconfig.json`: added `vite/client` to `types` to enable TypeScript support for `import.meta.env`
- `.gitignore`: explicitly excludes `.env`, `.env.production`, `.env.staging`
- `README.md`: corrects `npm start` → `npm run dev`, fixes the env var name, adds Vercel and Cloudflare deployment steps, clarifies the two distinct smoke checks (`npm run smoke` for build validation vs. `smokeCheck()` for AI content validation)
Summary by cubic
Prepares the app for production on Vercel and Cloudflare with a fixed Vite entry, SPA rewrites, hardened headers, and a reliable build smoke check. Also resolves the react-window API mismatch and gates the service worker to production with Vite type support.
Migration
Performance
Written for commit c928125.