Add Next.js 16 website (docs, playground, sandbox, install) #407
Conversation
Adds the GocciaScript marketing/docs site under `website/`:

- **Docs** — markdown pages synced from `../docs/` at build time via `scripts/sync-docs.mjs`. Custom markdown parser supports Pascal/JS/TS/C++/C#/Crystal/JSON syntax highlighting, Mermaid diagrams, anchor headings, cross-doc links, and a mobile-collapsible sidebar.
- **Playground** — runs real GocciaScript by spawning the bundled `GocciaScriptLoader` from `/api/run`. Rate-limited per IP. Ghost-text autocomplete, smart bracket pairing, auto-indent on Enter, ASI toggle, share-link round-trip (code + backend + ASI). Version selector reads the latest GitHub releases and shows precedence-ordered pins (latest patch, previous minor cycle's latest patch, previous major's latest patch — falling back when a major doesn't exist).
- **Sandbox / install** — landing-page sandbox, hero with runnable example, install instructions tab with the latest release tag from GitHub.
- **Site shell** — mobile hamburger menu, theme toggle (light/dark with espresso accent), sticky nav with sliding active-tab indicator, GitHub stars badge, OG/Twitter images.

Stack: Next.js 16 (App Router, Turbopack), React 19, Tailwind v4, Biome, Bun. Mermaid is dynamically imported so it only ships on pages that need it.

Doc-checker scripts (`check-doc-links.ts`, `check-doc-length.ts`, `check-doc-duplication.ts`) and the `markdownlint-cli2` config grow a small ignore list for `website/content/docs/**` (build-time synced copies already validated at the source location) and nested `node_modules`, plus the `website/CLAUDE.md` Claude Code @-include shim.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
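The share-link round-trip mentioned in the description (code + backend + ASI packed into the URL) can be sketched as below. This is an illustrative sketch, not the site's actual `share.ts`: the payload shape, the `encodeShare`/`decodeShare` names, and the base64url-JSON scheme are assumptions.

```typescript
// Hypothetical sketch of a share-link round-trip; the playground's real
// payload shape and wire encoding may differ.
interface SharePayload {
  code: string;
  mode?: string; // backend selector, e.g. "interpreter" | "bytecode"
  asi?: boolean;
}

// Pack the payload as base64url JSON so it is safe inside a URL fragment.
function encodeShare(payload: SharePayload): string {
  return Buffer.from(JSON.stringify(payload), "utf8").toString("base64url");
}

// Decode defensively: anything not matching the expected types is dropped,
// and a malformed blob degrades to null instead of throwing.
function decodeShare(encoded: string): SharePayload | null {
  try {
    const parsed = JSON.parse(
      Buffer.from(encoded, "base64url").toString("utf8"),
    ) as unknown;
    if (!parsed || typeof parsed !== "object") return null;
    const p = parsed as Record<string, unknown>;
    if (typeof p.code !== "string") return null;
    return {
      code: p.code,
      mode: typeof p.mode === "string" ? p.mode : undefined,
      asi: typeof p.asi === "boolean" ? p.asi : undefined,
    };
  } catch {
    return null;
  }
}
```

Decoding deliberately narrows every field, so a tampered or stale link yields `null` (or drops the optional fields) rather than pushing bad state into the editor.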
**Note: Reviews paused.** It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info — Configuration used: Organization UI · Review profile: CHILL · Plan: Pro

📒 Files selected for processing (1)
📝 Walkthrough

Adds a new Next.js website under `website/`.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Browser
    participant API as Next.js /api/run
    participant RL as RateLimiter
    participant Spawner as ProcessSpawner
    participant Child as GocciaScriptLoader
    participant WD as Watchdog
    Client->>API: POST /api/run { code, mode, asi }
    API->>RL: rateLimit(clientIp)
    RL-->>API: allowed / denied
    alt denied
        API-->>Client: 429 + Retry-After
    else allowed
        API->>API: validate headers/body sizes
        alt invalid
            API-->>Client: 400 / 413 { error }
        else valid
            API->>Spawner: spawn(childPath, args, env)
            Spawner->>Child: start process
            API->>WD: start timeout watchdog
            par stream handling
                Child-->>API: stdout / stderr chunks
                API-->>API: enforce per-stream caps, accumulate
            and abort/watchdog
                Client->>API: client abort
                API->>Child: SIGKILL
                WD->>Child: SIGKILL
            end
            Child-->>Spawner: exit code, signal, outputs
            API-->>Client: final JSON response (ok/value/output/error/timing/truncated/stderr)
        end
    end
```
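The flow above can be condensed into a small Node.js sketch. This is not the actual `/api/run` handler; `runSandboxed` and its caps/timeouts are invented for illustration, but the spawn, watchdog, and per-stream-cap steps mirror the diagram:

```typescript
import { spawn } from "node:child_process";

// Illustrative sketch of the spawn + watchdog + output-cap pattern from the
// diagram above; not the site's actual route handler.
function runSandboxed(
  command: string,
  args: string[],
  { timeoutMs = 5000, maxBytes = 64 * 1024 } = {},
): Promise<{ stdout: string; truncated: boolean; timedOut: boolean }> {
  return new Promise((resolve, reject) => {
    const child = spawn(command, args, { stdio: ["ignore", "pipe", "pipe"] });

    let stdout = "";
    let truncated = false;
    let timedOut = false;

    // Watchdog: hard-kill the child if it outlives the time budget.
    const watchdog = setTimeout(() => {
      timedOut = true;
      child.kill("SIGKILL");
    }, timeoutMs);

    child.stdout.on("data", (chunk: Buffer) => {
      if (stdout.length >= maxBytes) {
        truncated = true; // enforce per-stream cap, drop the rest
        return;
      }
      stdout += chunk.toString("utf8");
    });

    child.on("error", (err) => {
      clearTimeout(watchdog);
      reject(err);
    });

    child.on("close", () => {
      clearTimeout(watchdog);
      resolve({ stdout, truncated, timedOut });
    });
  });
}
```

A real handler would also wire the client's abort signal to the same `SIGKILL` path and fold exit code, signal, and timing into the JSON response.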
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Benchmark Results

386 benchmarks · Interpreted: 🟢 171 improved · 🔴 48 regressed · 167 unchanged · avg +3.9%

arraybuffer.js — Interp: 🟢 7, 7 unch. · avg +4.2% · Bytecode: 🟢 8, 🔴 2, 4 unch. · avg +2.9%
arrays.js — Interp: 🟢 16, 3 unch. · avg +3.9% · Bytecode: 🟢 9, 10 unch. · avg +2.8%
async-await.js — Interp: 🟢 3, 3 unch. · avg +2.9% · Bytecode: 🟢 4, 2 unch. · avg +6.9%
base64.js — Interp: 🟢 5, 5 unch. · avg +3.9% · Bytecode: 🟢 9, 1 unch. · avg +5.7%
classes.js — Interp: 🟢 15, 16 unch. · avg +2.8% · Bytecode: 🟢 1, 🔴 2, 28 unch. · avg -0.4%
closures.js — Interp: 🟢 3, 🔴 1, 7 unch. · avg +1.7% · Bytecode: 🟢 2, 🔴 2, 7 unch. · avg -0.1%
collections.js — Interp: 🟢 12 · avg +10.1% · Bytecode: 🟢 11, 🔴 1 · avg +11.1%
csv.js — Interp: 🔴 5, 8 unch. · avg -2.4% · Bytecode: 🔴 4, 9 unch. · avg -0.5%
destructuring.js — Interp: 🟢 9, 🔴 1, 12 unch. · avg +2.0% · Bytecode: 🟢 12, 🔴 6, 4 unch. · avg +3.0%
fibonacci.js — Interp: 🟢 7, 1 unch. · avg +4.7% · Bytecode: 🟢 1, 🔴 1, 6 unch. · avg +0.2%
float16array.js — Interp: 🟢 9, 🔴 10, 13 unch. · avg +9.3% · Bytecode: 🟢 17, 🔴 6, 9 unch. · avg +16.8%
for-of.js — Interp: 7 unch. · avg +0.9% · Bytecode: 🟢 6, 1 unch. · avg +4.2%
helpers/bench-module.js — Interp: 0 · Bytecode: 0
iterators.js — Interp: 🟢 38, 4 unch. · avg +5.7% · Bytecode: 🟢 18, 🔴 2, 22 unch. · avg +2.4%
json.js — Interp: 🟢 1, 🔴 3, 16 unch. · avg -0.4% · Bytecode: 🟢 7, 🔴 2, 11 unch. · avg +1.4%
jsx.jsx — Interp: 🟢 5, 🔴 3, 13 unch. · avg +1.1% · Bytecode: 🟢 1, 🔴 8, 12 unch. · avg -1.9%
modules.js — Interp: 🟢 3, 6 unch. · avg +1.9% · Bytecode: 🟢 4, 🔴 1, 4 unch. · avg +1.1%
numbers.js — Interp: 🟢 1, 🔴 3, 7 unch. · avg -0.4% · Bytecode: 🟢 4, 🔴 1, 6 unch. · avg +1.5%
objects.js — Interp: 🔴 2, 5 unch. · avg -1.7% · Bytecode: 🟢 1, 🔴 4, 2 unch. · avg -2.8%
promises.js — Interp: 🟢 2, 10 unch. · avg +0.5% · Bytecode: 🟢 9, 🔴 1, 2 unch. · avg +5.9%
regexp.js — Interp: 🔴 9, 2 unch. · avg -3.9% · Bytecode: 🔴 9, 2 unch. · avg -4.2%
strings.js — Interp: 🟢 6, 🔴 2, 11 unch. · avg +2.3% · Bytecode: 🟢 11, 🔴 1, 7 unch. · avg +34.5%
tsv.js — Interp: 🟢 1, 🔴 6, 2 unch. · avg -3.4% · Bytecode: 🟢 1, 🔴 5, 3 unch. · avg -4.5%
typed-arrays.js — Interp: 🟢 22 · avg +14.2% · Bytecode: 🟢 22 · avg +10.9%
uint8array-encoding.js — Interp: 🟢 6, 🔴 3, 9 unch. · avg +11.6% · Bytecode: 🔴 11, 7 unch. · avg -17.4%
Measured on ubuntu-latest x64. Benchmark ranges compare cached main-branch min/max ops/sec with the PR run; overlapping ranges are treated as unchanged noise. Percentage deltas are secondary context.
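The range-overlap rule in the note above can be expressed as a tiny predicate. This is a hypothetical reconstruction of the stated rule, not the benchmark harness's actual code:

```typescript
// Two ops/sec ranges count as "unchanged noise" when they overlap; a verdict
// of improved/regressed is only reported when the ranges are disjoint.
// Hypothetical sketch of the comparison rule described above.
interface Range {
  min: number;
  max: number;
}

type Verdict = "improved" | "regressed" | "unchanged";

function compareRanges(base: Range, pr: Range): Verdict {
  if (pr.min > base.max) return "improved"; // PR range strictly faster
  if (pr.max < base.min) return "regressed"; // PR range strictly slower
  return "unchanged"; // overlapping ranges: treated as noise
}
```

Comparing ranges rather than point averages is what keeps single noisy runs from flipping a benchmark's verdict.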
Suite Timing
Measured on ubuntu-latest x64.
Actionable comments posted: 10
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
🟡 Minor comments (12)
website/src/components/sandbox.tsx-39-55 (1)
39-55: ⚠️ Potential issue | 🟡 Minor

The “this month” example is date-coupled to April 2026.
The flow hardcodes `2026-04.json`; starting May 1, 2026 this conflicts with the “this month” prompt copy. Consider using a neutral filename (e.g., `transactions.current.json`) to keep the example evergreen.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/sandbox.tsx` around lines 39 - 55, The example is date-coupled to "2026-04.json"; update the hardcoded filename in the steps array (the "call" strings that reference 2026-04.json: the cat and both jq calls) to a neutral, evergreen name such as "transactions.current.json" (and adjust any path like "/tmp/agent/transactions/2026-04.json" to "/tmp/agent/transactions/transactions.current.json"); ensure any adjacent prompt copy that says “this month” or references April is consistent with the neutral filename so the example remains valid after April 2026.

website/src/lib/share.ts-35-38 (1)
35-38: ⚠️ Potential issue | 🟡 Minor

Validate `mode` before returning the decoded payload.

`decodeShare` currently returns `mode` as-is. Non-string values can escape while the function advertises `mode?: string`.

Suggested fix
```diff
 return {
   code: parsed.code,
-  mode: parsed.mode,
+  mode: typeof parsed.mode === "string" ? parsed.mode : undefined,
   asi: typeof parsed.asi === "boolean" ? parsed.asi : undefined,
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/lib/share.ts` around lines 35 - 38, The decodeShare function returns parsed.mode without type-checking, so non-string values can escape; update the return to validate mode is a string (e.g., only set mode when typeof parsed.mode === "string") and otherwise return undefined for mode; keep the existing handling for parsed.code and asi (asi check remains typeof parsed.asi === "boolean") so the returned payload matches the advertised types.

website/README.md-23-25 (1)
23-25: ⚠️ Potential issue | 🟡 Minor

README has stale template details.

Line 23 should reference `src/app/page.tsx` (not `app/page.tsx`), and Line 25 mentions Geist even though the layout imports different fonts.

Suggested doc fix
```diff
-You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
-
-This project uses [`next/font`](https://nextjs.org/docs/app/building-your-application/optimizing/fonts) to automatically optimize and load [Geist](https://vercel.com/font), a new font family for Vercel.
+You can start editing the page by modifying `src/app/page.tsx`. The page auto-updates as you edit the file.
+
+This project uses [`next/font`](https://nextjs.org/docs/app/building-your-application/optimizing/fonts) for local font optimization in the App Router.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/README.md` around lines 23 - 25, Update the README to point to the correct page entry and correct font info: change the path reference from app/page.tsx to src/app/page.tsx and replace the Geist/next/font mention with a note that the project’s fonts are imported in src/app/layout.tsx (or remove the specific font reference), ensuring the README accurately reflects the actual file path and the layout's imported fonts.

website/src/app/robots.ts-3-15 (1)
3-15: ⚠️ Potential issue | 🟡 Minor

Normalize `SITE_URL` before composing `sitemap`.

If `NEXT_PUBLIC_SITE_URL` ends with `/`, Line 14 can emit a double slash (`//sitemap.xml`). Trim trailing slashes once at source.

Suggested fix
```diff
-const SITE_URL = process.env.NEXT_PUBLIC_SITE_URL ?? "https://gocciascript.dev";
+const SITE_URL = (process.env.NEXT_PUBLIC_SITE_URL ?? "https://gocciascript.dev").replace(/\/+$/, "");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/app/robots.ts` around lines 3 - 15, The SITE_URL constant may include a trailing slash causing robots().sitemap to produce a double slash; normalize SITE_URL by trimming any trailing slashes before composing the sitemap string (update the SITE_URL assignment or add normalization in robots()) so that sitemap uses the cleaned SITE_URL (reference symbols: SITE_URL and robots()) and then concatenate "/sitemap.xml" to the trimmed URL.

website/src/lib/doc-source.ts-18-22 (1)
18-22: ⚠️ Potential issue | 🟡 Minor

Don't swallow all filesystem errors here.

The catch block returns `null` for any `readFile` failure, but the function's contract (documented in the JSDoc and expected by the `Docs` component) is that `null` means only missing files. Other errors—permissions, disk issues, etc.—should propagate as exceptions, not be silently treated as "file not synced."

Suggested fix
```diff
 export async function readDocSource(id: string): Promise<string | null> {
   const page = DOC_PAGES.find((p) => p.id === id);
   if (!page?.file) return null;
   const full = path.join(DOCS_ROOT, page.file);
   try {
     return await fs.readFile(full, "utf8");
-  } catch {
-    return null;
+  } catch (error) {
+    if ((error as NodeJS.ErrnoException).code === "ENOENT") return null;
+    throw error;
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/lib/doc-source.ts` around lines 18 - 22, The current catch around fs.readFile swallows all errors and returns null; change it to catch the error object (e.g., err) and only return null when err.code === 'ENOENT' (file not found), otherwise rethrow the error so permission or disk errors are propagated; update the try/catch surrounding the await fs.readFile(full, "utf8") to implement this behavior and keep the rest of the function contract used by the Docs component intact.

website/src/components/mermaid-block.tsx-34-39 (1)
34-39: ⚠️ Potential issue | 🟡 Minor

Diagrams won’t re-render when the site theme changes.

The theme is sampled once inside the effect, but the effect only depends on `code` (Line 86). Toggling light/dark mode will leave already-rendered diagrams on the old Mermaid theme.

🎨 Suggested fix (observe `data-theme` and include it in render deps)
```diff
 export function MermaidBlock({ code }: { code: string }) {
   const [state, setState] = useState<State>({ kind: "loading" });
+  const [theme, setTheme] = useState<"espresso" | "cream">("cream");
+
+  useEffect(() => {
+    const root = document.documentElement;
+    const sync = () =>
+      setTheme(root.dataset.theme === "espresso" ? "espresso" : "cream");
+    sync();
+    const obs = new MutationObserver(sync);
+    obs.observe(root, { attributes: true, attributeFilter: ["data-theme"] });
+    return () => obs.disconnect();
+  }, []);

   useEffect(() => {
@@
-    const dark =
-      typeof document !== "undefined" &&
-      document.documentElement.dataset.theme === "espresso";
+    const dark = theme === "espresso";
@@
-  }, [code]);
+  }, [code, theme]);
```

Also applies to: 86-86
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/mermaid-block.tsx` around lines 34 - 39, The component samples the theme once (const dark) inside the effect that depends only on code, so diagrams don't update when data-theme changes; update the rendering effect in mermaid-block (where mermaid.initialize and the render logic live) to react to theme changes by including the current theme value in the effect dependencies or by observing document.documentElement.dataset.theme (e.g., via a MutationObserver) and triggering re-initialization/render when it changes; ensure you re-call mermaid.initialize with the new theme and re-render the diagram in the same effect/handler that currently uses code so diagrams update on theme toggle.

website/scripts/sync-docs.mjs-50-56 (1)
50-56: ⚠️ Potential issue | 🟡 Minor

README copy status in the success log is currently unconditional.

Line 55 reports `+ README` even when `README.md` does not exist. That makes sync status harder to trust during CI diagnostics.

🧾 Proposed fix
```diff
-  if (await exists(readmeSrc)) {
+  let copiedReadme = false;
+  if (await exists(readmeSrc)) {
     await fs.copyFile(readmeSrc, path.join(docsDest, "readme.md"));
+    copiedReadme = true;
   }
@@
-    `[sync-docs] copied ${docCount} doc files + README from ${path.relative(websiteRoot, repoRoot)}/docs → content/docs`,
+    `[sync-docs] copied ${docCount} doc files${copiedReadme ? " + README" : ""} from ${path.relative(websiteRoot, repoRoot)}/docs → content/docs`,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/scripts/sync-docs.mjs` around lines 50 - 56, The success log unconditionally includes "+ README" even when readmeSrc wasn't present; change the logic to check and record whether README was actually copied. Use exists(readmeSrc) once, set a boolean like readmeCopied (based on the await exists(readmeSrc)), perform fs.copyFile only if true, and then update the console.log message (the line that references docCount and the README text) to append "+ README" only when readmeCopied is true; reference the variables readmeSrc, docsDest, docCount and the console.log call to locate the change.

website/src/components/markdown.tsx-363-366 (1)
363-366: ⚠️ Potential issue | 🟡 Minor

Normalize the Mermaid fence name before branching.

`renderHighlighted()` already lowercases info strings, but this branch only matches the exact literal `"mermaid"`. A fence like `` ```Mermaid `` will render as plain code instead of a diagram.

Suggested fix
```diff
-    if (b.lang === "mermaid") {
+    if (b.lang.trim().toLowerCase() === "mermaid") {
       return <MermaidBlock key={idx} code={b.code} />;
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/markdown.tsx` around lines 363 - 366, The branch that detects mermaid fences only checks b.lang === "mermaid", so capitalized fences like "Mermaid" are missed; normalize the language string before branching (e.g., use (b.lang || "").toLowerCase()) and change the condition in the case "code" block that returns <MermaidBlock .../> to compare the lowercased value to "mermaid" (referencing b.lang, the case "code" branch and MermaidBlock) so any capitalized or mixed-case fence names correctly render as diagrams.

website/src/components/install.tsx-15-20 (1)
15-20: ⚠️ Potential issue | 🟡 Minor

`Method.language` is ignored, so every install snippet is highlighted as TypeScript.

`CopyableCommand` only receives `command`, and Line 105 hardcodes `"ts"`. That means the `"shell"` metadata on `METHODS` never affects rendering, so all command blocks are highlighted with the wrong lexer. Either thread `m.language` through or drop the field until it is actually used.

Also applies to: 68-69, 103-106, 188-193
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/install.tsx` around lines 15 - 20, The Method.language field is currently ignored because CopyableCommand is only passed command and hardcodes "ts"; update the rendering to thread the language through: add a language prop to the CopyableCommand component and update its consumer(s) (where METHODS is mapped and m.language is referenced) to pass m.language (with a sensible default, e.g., "ts" or "shell"), then update CopyableCommand to use that language value for syntax highlighting instead of the hardcoded "ts". Locate the METHODS entries, the map that renders each Method (where m.language exists), and the CopyableCommand definition/usage and ensure the prop name matches and is used for the lexer.

website/src/components/docs.tsx-74-76 (1)
74-76: ⚠️ Potential issue | 🟡 Minor

Use an explicit null check for synced docs.

`readDocSource()` returns `null` for a missing file, but an empty synced markdown file comes through as `""`. The truthy check here would render the “Not synced” stub for a valid empty doc.

Suggested fix
```diff
-      {source ? (
+      {source !== null ? (
         <Markdown source={source} />
       ) : (
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/docs.tsx` around lines 74 - 76, The conditional rendering uses a truthy check on `source`, which treats an empty string as falsy and incorrectly shows the “Not synced” stub; change the condition in the JSX that currently checks `source ?` to an explicit null check (e.g. `source !== null`) so that `readDocSource()` returning `""` (empty synced doc) renders via `Markdown` while only `null` shows the not-synced stub; update the JSX in the `docs.tsx` component where `readDocSource`, the `source` variable, and the `Markdown` rendering are used to implement this explicit null check.

website/src/app/api/run/route.ts-93-97 (1)
93-97: ⚠️ Potential issue | 🟡 Minor

Enforce the source-size cap in bytes, not UTF-16 code units.

`code.length` lets multibyte input slip past the advertised 64 KiB limit, because the request body and stdin payload are UTF-8. Use `Buffer.byteLength(code, "utf8")` here so the limit matches what the runner actually receives.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/app/api/run/route.ts` around lines 93 - 97, The size check currently uses code.length which counts UTF-16 code units and can undercount bytes; change the check in the run route to measure UTF-8 byte length using Buffer.byteLength(code, "utf8") when comparing against MAX_CODE_BYTES so the enforced limit matches the actual bytes sent to the runner (update the conditional that references code and MAX_CODE_BYTES accordingly).

website/src/lib/highlight.ts-179-238 (1)
179-238: ⚠️ Potential issue | 🟡 Minor

Pascal block comments aren't tokenized in the generic highlighter.

`pascal` is declared as a supported language in `LANG_KEYWORDS`, but `highlightGeneric()` only recognizes `//` and `/* ... */` comments. Standard Pascal comment forms like `{ ... }` and `(* ... *)` will be rendered as punctuation and text instead of being marked as comments.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/lib/highlight.ts` around lines 179 - 238, highlightGeneric() currently only tokenizes '//' and '/* ... */' comments while LANG_KEYWORDS includes 'pascal'; update highlightGeneric() to also recognize Pascal comment forms by detecting '{' ... '}' and '(*' ... '*)' as comment tokens: when encountering '{' consume until the matching '}' and mark as a comment, and when encountering '(*' consume until the matching '*)' and mark as a comment (handle multiline content and ensure the scanner advances past the close); keep existing behavior for '//' and '/* ... */' and reuse the same comment token styling so Pascal block comments in the pascal entry of LANG_KEYWORDS are highlighted correctly.
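A scanner for the missing Pascal comment forms could look roughly like this. It is a standalone hypothetical sketch (the real `highlightGeneric()` emits styled tokens, not raw comment strings), shown only to illustrate consuming `{ ... }` and `(* ... *)` spans:

```typescript
// Standalone sketch: extract Pascal-style block comments from source text.
// Hypothetical helper, not the site's highlighter; it also ignores string
// literals, which a real tokenizer must skip first.
function pascalComments(src: string): string[] {
  const out: string[] = [];
  let i = 0;
  while (i < src.length) {
    if (src[i] === "{") {
      // Brace comment: consume through the matching "}" (or to EOF).
      const close = src.indexOf("}", i + 1);
      const end = close === -1 ? src.length : close + 1;
      out.push(src.slice(i, end));
      i = end;
    } else if (src.startsWith("(*", i)) {
      // Parenthesis-star comment: consume through "*)" (or to EOF).
      const close = src.indexOf("*)", i + 2);
      const end = close === -1 ? src.length : close + 2;
      out.push(src.slice(i, end));
      i = end;
    } else {
      i += 1;
    }
  }
  return out;
}
```

Both forms can span multiple lines, so the scanner advances past the closing delimiter rather than stopping at line ends.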
🧹 Nitpick comments (7)
website/biome.json (1)
21-23: Avoid disabling `noArrayIndexKey` globally.

Turning this rule off at project scope can mask real reconciliation bugs in dynamic lists. Prefer keeping it enabled and suppressing only at specific static-map sites.
Suggested config tweak
"linter": { "enabled": true, "rules": { "recommended": true, "suspicious": { - "noArrayIndexKey": "off" + "noArrayIndexKey": "warn" },🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/biome.json` around lines 21 - 23, The project-level Biome config currently disables the suspicious rule noArrayIndexKey ("suspicious": { "noArrayIndexKey": "off" }), which masks real reconciliation bugs; restore the rule by removing this global "off" or set it to "warn"/"error" and instead add targeted suppressions at specific components that legitimately use array indices as keys. Locate the "noArrayIndexKey" entry in biome.json, change its value from "off" to "warn" or delete the key so the default rule applies, and add inline/inline-file suppressions only where you have confirmed static-map usage.

website/src/lib/playground-storage.ts (1)
10-12: Harden the parsed storage shape before casting.

Accepting any object here can pull in unexpected prototypes/array payloads from corrupted storage. Normalize into a plain `Record<string, string>` instead of direct casting.

Suggested hardening
```diff
-  const parsed = JSON.parse(raw) as unknown;
-  if (parsed && typeof parsed === "object") return parsed as CodeMap;
+  const parsed = JSON.parse(raw) as unknown;
+  if (
+    parsed &&
+    typeof parsed === "object" &&
+    !Array.isArray(parsed) &&
+    Object.getPrototypeOf(parsed) === Object.prototype
+  ) {
+    const map: CodeMap = {};
+    for (const [k, v] of Object.entries(parsed)) {
+      if (typeof v === "string") map[k] = v;
+    }
+    return map;
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/lib/playground-storage.ts` around lines 10 - 12, The code currently casts any parsed JSON object to CodeMap (the parsed variable) which can introduce arrays or objects with unexpected prototypes; instead, after JSON.parse(raw) validate that parsed is a plain object and build a normalized plain Record<string,string> (e.g., Object.create(null) style) by iterating Object.entries(parsed) and only copying keys whose values are strings, ignoring non-string values and prototype properties, then return that normalized object as the CodeMap; update the logic around parsed and the return path in the function that reads storage so it never directly casts arbitrary parsed to CodeMap.

scripts/check-doc-links.ts (1)
28-40: Optional: centralize path-ignore matching shared across doc check scripts.

The new prefix-ignore logic is good, but it now appears in multiple checkers. Extracting a shared helper would reduce drift risk when ignore rules change again.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/check-doc-links.ts` around lines 28 - 40, The prefix-ignore matching logic duplicated in findMarkdownFiles (the IGNORE_PATH_PREFIXES constant and the isIgnoredPath function) should be extracted into a shared helper to avoid drift; create a utility (e.g., export function isIgnoredPath(path: string, root: string): boolean and export const IGNORE_PATH_PREFIXES) and replace the inline IGNORE_PATH_PREFIXES and isIgnoredPath usage in findMarkdownFiles with calls to that shared helper so other doc-check scripts import and reuse the same logic.

website/AGENTS.md (1)
2-4: Make this rule block more actionable.

Consider adding a 2–3 bullet checklist (scope, what to read, what to verify before commit) so the instruction is operational rather than advisory.
Based on learnings: Applies to AGENTS.md: Document agent implementations in AGENTS.md with clear descriptions of purpose, capabilities, and usage patterns.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/AGENTS.md` around lines 2 - 4, Update the rule block under the "# This is NOT the Next.js you know" header in AGENTS.md to be actionable by adding a 2–3 item checklist: (1) scope — what parts of the repo or APIs this guidance applies to, (2) what to read — specific docs to consult (e.g., "node_modules/next/dist/docs/"), and (3) pre-commit verifications — concrete checks to perform before committing (compatibility, deprecations, test run). Also add a short note that agent implementations in AGENTS.md must include purpose, capabilities, and usage patterns so reviewers can validate completeness.

scripts/check-doc-duplication.ts (1)
123-125: Consider centralizing doc-ignore path filtering in one shared helper.
`IGNORE_PATH_PREFIXES` + `isIgnoredPath(...)` is now repeated across the doc check scripts. Pulling this into a shared utility would reduce drift risk when ignore rules evolve.

Also applies to: 139-142, 148-153
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/check-doc-duplication.ts` around lines 123 - 125, Centralize the repeated ignore logic by extracting the IGNORE_PATH_PREFIXES constant and the isIgnoredPath(...) function into a shared utility module (e.g., export const IGNORE_PATH_PREFIXES and export function isIgnoredPath(path: string): boolean), then replace the local definitions in check-doc-duplication.ts and the other doc-check scripts with imports from that utility; ensure the shared isIgnoredPath preserves current path-prefix matching semantics and update all call sites (references to IGNORE_PATH_PREFIXES and isIgnoredPath) to import from the new module so future changes only require editing the single utility.

website/src/lib/docs-data.ts (1)
207-237: Generate `DOC_HREF_MAP` from `DOC_PAGES` to avoid drift.

Lines 207-233 duplicate canonical doc slugs already defined in Lines 14-203. This is easy to desync when new docs are added.
♻️ Proposed refactor
```diff
+const DOC_HREF_BASE = Object.fromEntries(
+  DOC_PAGES.map((p) => [p.file.replace(/\.md$/i, ""), p.id]),
+);
+
 export const DOC_HREF_MAP: Record<string, string> = {
-  language: "language",
-  "language-tables": "language-tables",
-  "built-ins": "built-ins",
-  "built-ins-temporal": "built-ins-temporal",
-  "built-ins-binary-data": "built-ins-binary-data",
-  "built-ins-data-formats": "built-ins-data-formats",
-  errors: "errors",
-  architecture: "architecture",
-  interpreter: "interpreter",
-  "bytecode-vm": "bytecode-vm",
-  "core-patterns": "core-patterns",
-  "value-system": "value-system",
-  "garbage-collector": "garbage-collector",
-  "decision-log": "decision-log",
-  "adding-built-in-types": "adding-built-in-types",
-  embedding: "embedding",
-  testing: "testing",
-  "testing-api": "testing-api",
-  benchmarks: "benchmarks",
-  "build-system": "build-system",
-  profiling: "profiling",
-  goals: "goals",
-  tutorial: "tutorial",
-  "contributing/workflow": "contributing-workflow",
-  "contributing/code-style": "contributing-code-style",
-  "contributing/tooling": "contributing-tooling",
+  ...DOC_HREF_BASE,
   CONTRIBUTING: "contributing-workflow",
   AGENTS: "contributing-workflow",
   LICENSE: "contributing-workflow",
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/lib/docs-data.ts` around lines 207 - 237, DOC_HREF_MAP is a hand-maintained duplicate of DOC_PAGES and will drift; replace the hardcoded object by programmatically generating DOC_HREF_MAP from DOC_PAGES (e.g., iterate DOC_PAGES entries to map each page.slug to its canonical href), ensuring you normalize keys the same way the rest of the app expects and preserve the special legacy aliases (CONTRIBUTING, AGENTS, LICENSE -> "contributing-workflow") by merging them into the generated map; update the export so DOC_HREF_MAP is built at module load from DOC_PAGES rather than being manually listed.

website/src/components/highlighted-code.tsx (1)
8-8: The `t.cls` conditional can be simplified.

Given the current token model, `cls` is always present, so the ternary is redundant in both renderers.

🧹 Optional cleanup
```diff
-  <span key={idx} className={t.cls ? `tk-${t.cls}` : undefined}>
+  <span key={idx} className={`tk-${t.cls}`}>
     {t.text}
   </span>
```

Also applies to: 27-27
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/highlighted-code.tsx` at line 8, The conditional ternary for className in the highlighted-code component is unnecessary because token.cls is always defined; replace occurrences like the span with key={idx} (and the similar occurrence around line 27) to always set className to the computed value (e.g., "tk-" + t.cls) instead of using t.cls ? `tk-${t.cls}` : undefined, updating both renderers in the component (look for the span with key={idx} and the other span/render call) so the className is consistently applied without the redundant conditional.
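The playground-storage hardening suggested earlier in this section amounts to a normalize-on-read helper. A standalone sketch, with `CodeMap` mirroring the review and everything else invented:

```typescript
type CodeMap = Record<string, string>;

// Parse a persisted JSON blob into a plain string-to-string map, dropping
// arrays, exotic prototypes, and non-string values. Sketch only; the real
// playground-storage module differs.
function readCodeMap(raw: string | null): CodeMap {
  if (!raw) return {};
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return {}; // corrupted storage degrades to an empty map
  }
  if (
    !parsed ||
    typeof parsed !== "object" ||
    Array.isArray(parsed) ||
    Object.getPrototypeOf(parsed) !== Object.prototype
  ) {
    return {};
  }
  const map: CodeMap = {};
  for (const [key, value] of Object.entries(parsed)) {
    if (typeof value === "string") map[key] = value;
  }
  return map;
}
```

Rebuilding the map entry by entry, rather than casting the parsed value, guarantees the caller only ever sees plain string values regardless of what was persisted.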
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@website/scripts/sync-docs.mjs`:
- Around line 17-28: The helper exists() currently swallows all fs.access errors
and returns false, causing copyMarkdownTree (and the similar block at lines
~44-49) to silently succeed when the source is missing or unreadable; update
exists(p) to only return false for ENOENT (missing) and rethrow other errors (or
propagate them) so permission/IO errors fail fast, then ensure copyMarkdownTree
calls this updated exists and surface the thrown errors to abort the prebuild
rather than returning 0.
In `@website/src/app/api/run/route.ts`:
- Around line 124-127: The child process is currently spawned with env: {
...process.env, NO_COLOR: "1" } which leaks all server secrets; change spawn
usage so you build a minimal allowlisted environment instead of spreading
process.env. In the route handler where you create child via spawn(binary, args,
{ stdio: ..., env: ... }) construct env by explicitly including only the
variables the sandboxed loader needs (for example NO_COLOR and any specific
PATH/HOME or NODE_OPTIONS you legitimately require) and omit everything else;
then pass that env object to spawn. Ensure you do not use the spread operator on
process.env anywhere in this code path and reference the existing child variable
and spawn(binary, args, ...) call when applying the fix.
- Around line 68-79: The responses that use NextResponse.json currently return
error as a plain string while the normal runner path returns error as an object;
update every error branch (including the rate-limit branch using
NextResponse.json with retryAfter, and the other 400/413/429/500 branches) to
return a consistent error object shape—e.g., replace { error: "rate limit
exceeded", ... } with { error: { message: "rate limit exceeded", code:
"RATE_LIMIT" }, retryAfter } (or add a separate transportError object) so the
consumer expects error.message; ensure the same structure is used across all
branches (and adjust any types/interfaces that represent the endpoint response).
In `@website/src/app/opengraph-image.tsx`:
- Around line 9-30: The OG generation currently fetches fonts from FONT_URLS via
fetchFont inside OpenGraphImage which makes builds network-dependent; instead
vendor the three .ttf files into the repo (e.g., under public/fonts) and replace
the remote fetches with local fs.readFile calls: update FONT_URLS (or remove it)
to point to local file paths and change the Promise.all entries in
OpenGraphImage to read those files (or replace fetchFont to use fs.readFile) so
the instrumentSerifRegular, instrumentSerifItalic, and ibmPlexSans600 buffers
are loaded from disk like logoBuf.
In `@website/src/components/mermaid-block.tsx`:
- Around line 37-41: Change the mermaid configuration to use a stricter security
setting: in the mermaid.initialize call (see mermaid.initialize and the
securityLevel option) replace securityLevel: "loose" with securityLevel:
"strict" to disable HTML/click handlers and prevent XSS, and keep the existing
use of dangerouslySetInnerHTML but ensure diagram HTML is now rendered under the
"strict" policy.
In `@website/src/components/playground.tsx`:
- Around line 126-137: The version dropdown is only cosmetic because the run
handler doesn't send the selected version; update the run function to include
the version value in the POST payload (add version to JSON.stringify body sent
to fetch("/api/run") in run) and also include version in whatever share/save
payload logic (the code that builds the share payload around lines 214-227) so
the share state preserves the selected version; ensure the server side /api/run
and any replay/share consumers read and respect the version field you add.
- Around line 264-312: The toolbar controls and editor lack programmatic labels;
add unique ids and connect them with <label htmlFor="..."> or aria-labelledby
for each control (the Example select bound to exId/setExId and rendering
EXAMPLES, the Backend select bound to backend/setBackend and Backend type, the
ASI checkbox bound to asi/setAsi, and the Version select bound to
version/setVersion and versions) so screen readers announce their purpose, and
give the main editor textarea an accessible name via a visible label linked by
htmlFor or an aria-label/aria-labelledby that references the same id; ensure ids
are stable and unique so assistive tech can reliably associate labels with the
controls.
In `@website/src/lib/examples.ts`:
- Around line 261-291: The example object with id "data" has an output array
that includes "Package: goccia v2027.1.0" and "App name: GocciaScript" which are
not produced by the example code; update the output for the example (the object
with id "data" in website/src/lib/examples.ts) to match the actual console logs
emitted by the snippet—remove the two incorrect entries and ensure the output
array only contains the "Host: ...", "Features: ...", and "Menu rows: ..." log
entries (or alternatively add the corresponding console.log statements to the
code if those package/app logs were intended).
- Around line 193-224: The canned outputs for the "fetch" example (id "fetch",
functions fetchJoke and top-level joke) and the Temporal example (around lines
227-258) are nondeterministic and will drift; either freeze the inputs in the
example code (replace the live fetch with a deterministic/mock response and use
a fixed Temporal value) or make the expected output entries generic (e.g., keep
the "Today's dad joke:" log but replace the exact joke string with a generic
placeholder or pattern, and replace the exact Temporal string with a stable
placeholder or message indicating a formatted time). Update the examples' code
or their corresponding output array accordingly so the preview remains stable.
In `@website/src/lib/rate-limit.ts`:
- Around line 59-67: The clientIp function currently trusts x-forwarded-for
first which is spoofable; change clientIp to prefer platform-injected headers
(cf-connecting-ip, x-real-ip, x-vercel-forwarded-for) first and only fall back
to x-forwarded-for as a last resort; when falling back to x-forwarded-for, parse
the header and return the right-most non-empty segment (trimmed) rather than the
left-most so that, when used, you are more likely to pick the proxy-appended
address; update the logic in clientIp to check headers.get("cf-connecting-ip"),
headers.get("x-real-ip"), headers.get("x-vercel-forwarded-for") before
consulting headers.get("x-forwarded-for") and use the last comma-separated value
as the fallback.
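The header-preference order above can be sketched like this (a standalone illustration, assuming the Fetch `Headers` API and an `"unknown"` fallback — the real function's fallback value may differ):

```typescript
// Platform-injected headers first; they can't be set by the client.
const TRUSTED_IP_HEADERS = ["cf-connecting-ip", "x-real-ip", "x-vercel-forwarded-for"];

export function clientIp(headers: Headers): string {
  for (const name of TRUSTED_IP_HEADERS) {
    const value = headers.get(name)?.trim();
    if (value) return value;
  }
  // Fallback: x-forwarded-for is client-spoofable on the left, so take
  // the right-most non-empty segment (appended by the nearest proxy).
  const xff = headers.get("x-forwarded-for");
  if (xff) {
    const parts = xff.split(",").map((p) => p.trim()).filter(Boolean);
    if (parts.length > 0) return parts[parts.length - 1];
  }
  return "unknown";
}
```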
---
Minor comments:
In `@website/README.md`:
- Around line 23-25: Update the README to point to the correct page entry and
correct font info: change the path reference from app/page.tsx to
src/app/page.tsx and replace the Geist/next/font mention with a note that the
project’s fonts are imported in src/app/layout.tsx (or remove the specific font
reference), ensuring the README accurately reflects the actual file path and the
layout's imported fonts.
In `@website/scripts/sync-docs.mjs`:
- Around line 50-56: The success log unconditionally includes "+ README" even
when readmeSrc wasn't present; change the logic to check and record whether
README was actually copied. Use exists(readmeSrc) once, set a boolean like
readmeCopied (based on the await exists(readmeSrc)), perform fs.copyFile only if
true, and then update the console.log message (the line that references docCount
and the README text) to append "+ README" only when readmeCopied is true;
reference the variables readmeSrc, docsDest, docCount and the console.log call
to locate the change.
In `@website/src/app/api/run/route.ts`:
- Around line 93-97: The size check currently uses code.length which counts
UTF-16 code units and can undercount bytes; change the check in the run route to
measure UTF-8 byte length using Buffer.byteLength(code, "utf8") when comparing
against MAX_CODE_BYTES so the enforced limit matches the actual bytes sent to
the runner (update the conditional that references code and MAX_CODE_BYTES
accordingly).
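The difference between code-unit and byte counting is easy to demonstrate; the limit constant below is illustrative, not the route's actual value:

```typescript
const MAX_CODE_BYTES = 64 * 1024; // illustrative limit, not the route's actual constant

export function codeTooLarge(code: string): boolean {
  // code.length counts UTF-16 code units; "€".length === 1 but the
  // character is 3 bytes in UTF-8, so measure the real payload size.
  return Buffer.byteLength(code, "utf8") > MAX_CODE_BYTES;
}
```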
In `@website/src/app/robots.ts`:
- Around line 3-15: The SITE_URL constant may include a trailing slash causing
robots().sitemap to produce a double slash; normalize SITE_URL by trimming any
trailing slashes before composing the sitemap string (update the SITE_URL
assignment or add normalization in robots()) so that sitemap uses the cleaned
SITE_URL (reference symbols: SITE_URL and robots()) and then concatenate
"/sitemap.xml" to the trimmed URL.
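A minimal sketch of the normalization (helper name is hypothetical):

```typescript
export function sitemapUrl(siteUrl: string): string {
  // Trim any trailing slashes before appending the path so
  // "https://example.com/" never yields "https://example.com//sitemap.xml".
  return siteUrl.replace(/\/+$/, "") + "/sitemap.xml";
}
```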
In `@website/src/components/docs.tsx`:
- Around line 74-76: The conditional rendering uses a truthy check on `source`,
which treats an empty string as falsy and incorrectly shows the “Not synced”
stub; change the condition in the JSX that currently checks `source ?` to an
explicit null check (e.g. `source !== null`) so that `readDocSource()` returning
`""` (empty synced doc) renders via `Markdown` while only `null` shows the
not-synced stub; update the JSX in the `docs.tsx` component where
`readDocSource`, the `source` variable, and the `Markdown` rendering are used to
implement this explicit null check.
In `@website/src/components/install.tsx`:
- Around line 15-20: The Method.language field is currently ignored because
CopyableCommand is only passed command and hardcodes "ts"; update the rendering
to thread the language through: add a language prop to the CopyableCommand
component and update its consumer(s) (where METHODS is mapped and m.language is
referenced) to pass m.language (with a sensible default, e.g., "ts" or "shell"),
then update CopyableCommand to use that language value for syntax highlighting
instead of the hardcoded "ts". Locate the METHODS entries, the map that renders
each Method (where m.language exists), and the CopyableCommand definition/usage
and ensure the prop name matches and is used for the lexer.
In `@website/src/components/markdown.tsx`:
- Around line 363-366: The branch that detects mermaid fences only checks b.lang
=== "mermaid", so capitalized fences like "Mermaid" are missed; normalize the
language string before branching (e.g., use (b.lang || "").toLowerCase()) and
change the condition in the case "code" block that returns <MermaidBlock .../>
to compare the lowercased value to "mermaid" (referencing b.lang, the case
"code" branch and MermaidBlock) so any capitalized or mixed-case fence names
correctly render as diagrams.
In `@website/src/components/mermaid-block.tsx`:
- Around line 34-39: The component samples the theme once (const dark) inside
the effect that depends only on code, so diagrams don't update when data-theme
changes; update the rendering effect in mermaid-block (where mermaid.initialize
and the render logic live) to react to theme changes by including the current
theme value in the effect dependencies or by observing
document.documentElement.dataset.theme (e.g., via a MutationObserver) and
triggering re-initialization/render when it changes; ensure you re-call
mermaid.initialize with the new theme and re-render the diagram in the same
effect/handler that currently uses code so diagrams update on theme toggle.
In `@website/src/components/sandbox.tsx`:
- Around line 39-55: The example is date-coupled to "2026-04.json"; update the
hardcoded filename in the steps array (the "call" strings that reference
2026-04.json: the cat and both jq calls) to a neutral, evergreen name such as
"transactions.current.json" (and adjust any path like
"/tmp/agent/transactions/2026-04.json" to
"/tmp/agent/transactions/transactions.current.json"); ensure any adjacent prompt
copy that says “this month” or references April is consistent with the neutral
filename so the example remains valid after April 2026.
In `@website/src/lib/doc-source.ts`:
- Around line 18-22: The current catch around fs.readFile swallows all errors
and returns null; change it to catch the error object (e.g., err) and only
return null when err.code === 'ENOENT' (file not found), otherwise rethrow the
error so permission or disk errors are propagated; update the try/catch
surrounding the await fs.readFile(full, "utf8") to implement this behavior and
keep the rest of the function contract used by the Docs component intact.
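The ENOENT-only catch can be sketched as follows (a standalone illustration of the pattern, not the file's actual implementation):

```typescript
import { promises as fs } from "node:fs";

export async function readDocSource(full: string): Promise<string | null> {
  try {
    return await fs.readFile(full, "utf8");
  } catch (err) {
    // A missing file maps to null (the "not synced" case); anything
    // else (EACCES, EIO, ...) is a real problem and must propagate.
    if ((err as NodeJS.ErrnoException).code === "ENOENT") return null;
    throw err;
  }
}
```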
In `@website/src/lib/highlight.ts`:
- Around line 179-238: highlightGeneric() currently only tokenizes '//' and '/*
... */' comments while LANG_KEYWORDS includes 'pascal'; update
highlightGeneric() to also recognize Pascal comment forms by detecting '{' ...
'}' and '(*' ... '*)' as comment tokens: when encountering '{' consume until the
matching '}' and mark as a comment, and when encountering '(*' consume until the
matching '*)' and mark as a comment (handle multiline content and ensure the
scanner advances past the close); keep existing behavior for '//' and '/* ...
*/' and reuse the same comment token styling so Pascal block comments in the
pascal entry of LANG_KEYWORDS are highlighted correctly.
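The Pascal comment forms can be scanned with the simple rule below. This is a standalone sketch of the scanning logic, not the actual `highlightGeneric()` tokenizer — it only returns comment spans and ignores nesting (classic Pascal `{...}` comments don't nest):

```typescript
// Return [start, end) index pairs for Pascal block comments in src.
export function pascalCommentSpans(src: string): Array<[number, number]> {
  const spans: Array<[number, number]> = [];
  let i = 0;
  while (i < src.length) {
    if (src[i] === "{") {
      // Consume to the matching close brace; unterminated comments run to EOF.
      const close = src.indexOf("}", i + 1);
      const end = close === -1 ? src.length : close + 1;
      spans.push([i, end]);
      i = end;
    } else if (src.startsWith("(*", i)) {
      const close = src.indexOf("*)", i + 2);
      const end = close === -1 ? src.length : close + 2;
      spans.push([i, end]);
      i = end;
    } else {
      i++;
    }
  }
  return spans;
}
```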
In `@website/src/lib/share.ts`:
- Around line 35-38: The decodeShare function returns parsed.mode without
type-checking, so non-string values can escape; update the return to validate
mode is a string (e.g., only set mode when typeof parsed.mode === "string") and
otherwise return undefined for mode; keep the existing handling for parsed.code
and asi (asi check remains typeof parsed.asi === "boolean") so the returned
payload matches the advertised types.
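A sketch of the validation (field names follow the review text; the real `SharePayload` type may differ):

```typescript
type SharePayload = { code?: string; mode?: string; asi?: boolean };

// Only copy fields whose runtime type matches the advertised type;
// anything else becomes undefined rather than escaping untyped.
export function validateShare(parsed: Record<string, unknown>): SharePayload {
  return {
    code: typeof parsed.code === "string" ? parsed.code : undefined,
    mode: typeof parsed.mode === "string" ? parsed.mode : undefined,
    asi: typeof parsed.asi === "boolean" ? parsed.asi : undefined,
  };
}
```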
---
Nitpick comments:
In `@scripts/check-doc-duplication.ts`:
- Around line 123-125: Centralize the repeated ignore logic by extracting the
IGNORE_PATH_PREFIXES constant and the isIgnoredPath(...) function into a shared
utility module (e.g., export const IGNORE_PATH_PREFIXES and export function
isIgnoredPath(path: string): boolean), then replace the local definitions in
check-doc-duplication.ts and the other doc-check scripts with imports from that
utility; ensure the shared isIgnoredPath preserves current path-prefix matching
semantics and update all call sites (references to IGNORE_PATH_PREFIXES and
isIgnoredPath) to import from the new module so future changes only require
editing the single utility.
In `@scripts/check-doc-links.ts`:
- Around line 28-40: The prefix-ignore matching logic duplicated in
findMarkdownFiles (the IGNORE_PATH_PREFIXES constant and the isIgnoredPath
function) should be extracted into a shared helper to avoid drift; create a
utility (e.g., export function isIgnoredPath(path: string, root: string):
boolean and export const IGNORE_PATH_PREFIXES) and replace the inline
IGNORE_PATH_PREFIXES and isIgnoredPath usage in findMarkdownFiles with calls to
that shared helper so other doc-check scripts import and reuse the same logic.
In `@website/AGENTS.md`:
- Around line 2-4: Update the rule block under the "# This is NOT the Next.js
you know" header in AGENTS.md to be actionable by adding a 2–3 item checklist:
(1) scope — what parts of the repo or APIs this guidance applies to, (2) what to
read — specific docs to consult (e.g., "node_modules/next/dist/docs/"), and (3)
pre-commit verifications — concrete checks to perform before committing
(compatibility, deprecations, test run). Also add a short note that agent
implementations in AGENTS.md must include purpose, capabilities, and usage
patterns so reviewers can validate completeness.
In `@website/biome.json`:
- Around line 21-23: The project-level Biome config currently disables the
suspicious rule noArrayIndexKey ("suspicious": { "noArrayIndexKey": "off" }),
which masks real reconciliation bugs; restore the rule by removing this global
"off" or set it to "warn"/"error" and instead add targeted suppressions at
specific components that legitimately use array indices as keys. Locate the
"noArrayIndexKey" entry in biome.json, change its value from "off" to "warn" or
delete the key so the default rule applies, and add inline/inline-file
suppressions only where you have confirmed static-map usage.
In `@website/src/components/highlighted-code.tsx`:
- Line 8: The conditional ternary for className in the highlighted-code
component is unnecessary because token.cls is always defined; replace
occurrences like the span with key={idx} (and the similar occurrence around line
27) to always set className to the computed value (e.g., "tk-" + t.cls) instead
of using t.cls ? `tk-${t.cls}` : undefined, updating both renderers in the
component (look for the span with key={idx} and the other span/render call) so
the className is consistently applied without the redundant conditional.
In `@website/src/lib/docs-data.ts`:
- Around line 207-237: DOC_HREF_MAP is a hand-maintained duplicate of DOC_PAGES
and will drift; replace the hardcoded object by programmatically generating
DOC_HREF_MAP from DOC_PAGES (e.g., iterate DOC_PAGES entries to map each
page.slug to its canonical href), ensuring you normalize keys the same way the
rest of the app expects and preserve the special legacy aliases (CONTRIBUTING,
AGENTS, LICENSE -> "contributing-workflow") by merging them into the generated
map; update the export so DOC_HREF_MAP is built at module load from DOC_PAGES
rather than being manually listed.
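The derivation can look like this. The `DOC_PAGES` entries below are placeholder data — only the shape and the merge order (generated map first, legacy aliases layered on top) illustrate the idea:

```typescript
type DocPage = { slug: string };

// Placeholder entries; the real DOC_PAGES list lives in docs-data.ts.
const DOC_PAGES: DocPage[] = [{ slug: "getting-started" }, { slug: "contributing-workflow" }];

const LEGACY_ALIASES: Record<string, string> = {
  CONTRIBUTING: "contributing-workflow",
  AGENTS: "contributing-workflow",
  LICENSE: "contributing-workflow",
};

// Built at module load from DOC_PAGES, so adding a page needs one edit.
export const DOC_HREF_MAP: Record<string, string> = {
  ...Object.fromEntries(DOC_PAGES.map((p) => [p.slug, `/docs/${p.slug}`])),
  ...Object.fromEntries(
    Object.entries(LEGACY_ALIASES).map(([alias, slug]) => [alias, `/docs/${slug}`]),
  ),
};
```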
In `@website/src/lib/playground-storage.ts`:
- Around line 10-12: The code currently casts any parsed JSON object to CodeMap
(the parsed variable) which can introduce arrays or objects with unexpected
prototypes; instead, after JSON.parse(raw) validate that parsed is a plain
object and build a normalized plain Record<string,string> (e.g.,
Object.create(null) style) by iterating Object.entries(parsed) and only copying
keys whose values are strings, ignoring non-string values and prototype
properties, then return that normalized object as the CodeMap; update the logic
around parsed and the return path in the function that reads storage so it never
directly casts arbitrary parsed to CodeMap.
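A sketch of the defensive parse (function name is hypothetical; the real reader also touches localStorage, which is omitted here):

```typescript
type CodeMap = Record<string, string>;

export function parseCodeMap(raw: string): CodeMap {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return {};
  }
  // Reject arrays and non-objects outright.
  if (parsed === null || typeof parsed !== "object" || Array.isArray(parsed)) return {};
  // Copy only string-valued own entries into a prototype-less map, so
  // keys like "__proto__" can't pollute anything.
  const map: CodeMap = Object.create(null);
  for (const [key, value] of Object.entries(parsed as Record<string, unknown>)) {
    if (typeof value === "string") map[key] = value;
  }
  return map;
}
```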
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0d4b902a-c636-4bb4-a955-3a1451460853
⛔ Files ignored due to path filters (3)
- `website/bun.lock` is excluded by `!**/*.lock`
- `website/public/logo.png` is excluded by `!**/*.png`
- `website/src/app/favicon.ico` is excluded by `!**/*.ico`
📒 Files selected for processing (50)
- .markdownlint-cli2.jsonc
- scripts/check-doc-duplication.ts
- scripts/check-doc-length.ts
- scripts/check-doc-links.ts
- website/.gitignore
- website/AGENTS.md
- website/CLAUDE.md
- website/README.md
- website/biome.json
- website/next.config.ts
- website/package.json
- website/postcss.config.mjs
- website/scripts/sync-docs.mjs
- website/src/app/api/run/route.ts
- website/src/app/docs/[id]/page.tsx
- website/src/app/docs/page.tsx
- website/src/app/globals.css
- website/src/app/install/page.tsx
- website/src/app/layout.tsx
- website/src/app/opengraph-image.tsx
- website/src/app/page.tsx
- website/src/app/playground/page.tsx
- website/src/app/robots.ts
- website/src/app/sandbox/page.tsx
- website/src/app/sitemap.ts
- website/src/app/template.tsx
- website/src/app/twitter-image.tsx
- website/src/components/anchor-heading.tsx
- website/src/components/docs.tsx
- website/src/components/highlighted-code.tsx
- website/src/components/highlighted-textarea.tsx
- website/src/components/icons.tsx
- website/src/components/install.tsx
- website/src/components/landing.tsx
- website/src/components/markdown.tsx
- website/src/components/mermaid-block.tsx
- website/src/components/playground.tsx
- website/src/components/sandbox.tsx
- website/src/components/site-shell.tsx
- website/src/lib/doc-source.ts
- website/src/lib/docs-data.ts
- website/src/lib/examples.ts
- website/src/lib/format-error.ts
- website/src/lib/github.ts
- website/src/lib/highlight.ts
- website/src/lib/landing-data.ts
- website/src/lib/playground-storage.ts
- website/src/lib/rate-limit.ts
- website/src/lib/share.ts
- website/tsconfig.json
Security & robustness:
- `api/run/route.ts` — replace `{ ...process.env, NO_COLOR: "1" }` with a
minimal allowlist (PATH/HOME/TMPDIR/LANG/LC_ALL/TZ/NODE_ENV) so server
secrets aren't forwarded into the sandboxed loader. Switch the early-
error responses to a uniform `{ error: { message, code } }` shape via a
`transportError` helper. Use `Buffer.byteLength(code, "utf8")` for the
size check instead of UTF-16 `code.length`.
- `lib/rate-limit.ts` — prefer platform-injected headers (cf-connecting-
ip / x-real-ip / x-vercel-forwarded-for); only fall back to
`x-forwarded-for`, and pick the right-most segment when we do.
- `components/mermaid-block.tsx` — switch `securityLevel` to `"strict"`
(defense in depth — our diagrams don't need click handlers or HTML
interpolation). Observe `<html data-theme>` via MutationObserver and
re-render on theme toggle.
- `lib/doc-source.ts`, `scripts/sync-docs.mjs` — only swallow `ENOENT`
in `try/catch` around `fs.readFile`/`fs.access`; rethrow permission /
IO errors so they surface instead of being silently masked.
Accessibility:
- `components/playground.tsx` — toolbar `<span>` "labels" become real
`<label htmlFor>` elements, controls get stable `useId()` ids, and
the editor textarea gains an `aria-label` referencing the active
example. Code comment explains the version selector is display-only
(the API runs against the bundled binary regardless) and now
round-trips in share links.
Drift / dead-data cleanup:
- `lib/examples.ts` — drop the unused `output: OutputLine[]` field from
every example. The playground renders live runner output, never the
canned arrays; the data was drifting.
- `lib/docs-data.ts` — `DOC_HREF_MAP` now derived from `DOC_PAGES` so
adding a page no longer needs two edits.
- `lib/playground-storage.ts` — validate the localStorage payload's
shape and copy only string-valued entries into a prototype-less map,
rejecting arrays / non-string values.
- `lib/share.ts` — type-check `mode` (and the new `version` field)
before returning them, matching the existing `asi` validation.
Minor correctness:
- `app/robots.ts` — strip trailing slashes from `SITE_URL` so the
sitemap URL never gets a `//`.
- `components/docs.tsx` — explicit `source !== null` check so an empty
synced doc renders via Markdown instead of the not-synced stub.
- `components/install.tsx` — thread the `Method.language` field through
`CopyableCommand` instead of hard-coding `"ts"`.
- `components/markdown.tsx` — case-insensitive `mermaid` fence detection.
- `components/sandbox.tsx` — replace date-coupled `2026-04.json` with
evergreen `transactions.current.json`.
- `lib/highlight.ts` — recognize Pascal block-comment forms (`{...}`
and `(* ... *)`) under `lang === "pascal"`, gated so they don't
swallow C-family object literals.
- `README.md` — fix `app/page.tsx` → `src/app/page.tsx`, drop the
Geist/next/font reference.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
🧹 Nitpick comments (3)
website/src/components/sandbox.tsx (1)
291-306: Simulated output is static — consider clarifying the demo's intent. The success case always emits the same hardcoded JSON regardless of changes to `code` or `globalsText`. Given the UI copy "Edit the script, change the injected globals, execute — and see what the host gets back" (lines 389-391), users may expect dynamic output. Since this is a demo simulation rather than a real backend call, consider either:
- Adjusting the UI copy to clarify it's a static preview, or
- Parsing the `globalsText` JSON and computing `summary`/`count` dynamically for a more interactive experience.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/sandbox.tsx` around lines 291 - 306, The simulated success output is hardcoded and doesn't reflect edits to code or globalsText, which contradicts the UI copy; update the simulation in the sandbox component so the output derives from the current code/globalsText instead of static JSON: parse globalsText (handle JSON parse errors) and compute the "summary" array and "count" dynamically (push to the same lines array used currently), or alternatively change the UI copy text to explicitly state this is a static preview; locate the code that builds the lines array (the block that pushes "{", '"summary": [', items, '"count": 5', and the meta line) and either replace the hardcoded items with computed items based on parsed globalsText and code or update the UI string shown to the user accordingly.
website/src/lib/highlight.ts (1)
541-584: `highlightJson` provides key/value distinction but isn't wired up in markdown rendering. This function distinguishes JSON keys (followed by `:`) from string values, which is better than `highlightGeneric` for JSON. However, `markdown.tsx` lines 34-36 dispatch JSON fences to `HighlightedGeneric`, which uses `highlightGeneric`, not `highlightJson`. If key/value distinction in JSON code blocks is desired, consider updating the markdown renderer to use `highlightJson` for the `"json"` case, or document that `highlightJson` is intended only for the playground's highlighted textarea.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/lib/highlight.ts` around lines 541 - 584, The markdown renderer currently routes JSON fenced code to HighlightedGeneric which calls highlightGeneric; update the renderer (in markdown.tsx) so that when the language equals "json" it uses highlightJson instead of highlightGeneric (or add a distinct HighlightedJson component that calls highlightJson and use that for the "json" case); ensure you reference the existing HighlightedGeneric/highlightGeneric behavior and replace or branch it to call highlightJson for proper key/value highlighting.
website/src/components/markdown.tsx (1)
228-255: Consider adding a note about nested inline formatting in table cells. The table parser correctly handles escaped pipes (`\|`), but the inline rendering at line 356 passes cell content through `renderInline`, which doesn't support nested formatting (e.g., `**bold** and *italic*` in the same cell will only partially render). This is a minor limitation for docs content.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/markdown.tsx` around lines 228 - 255, The table parser captures cell strings via splitRow (used to build hdr and rows) but downstream rendering uses renderInline which currently doesn't handle nested inline emphasis (e.g., "**bold** and *italic*"), so update the inline renderer to support nested/combined inline formatting or ensure table cell strings are processed by a more complete inline parse: modify the renderInline implementation (or replace its callsite) to parse strong (**) before em (*) and allow nested tokens, or refactor renderInline to run a recursive/ordered parse so splitRow/hdr/rows content renders both bold and italic within the same cell correctly; look for renderInline and splitRow usages in markdown.tsx to apply the fix.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@website/src/components/markdown.tsx`:
- Around line 228-255: The table parser captures cell strings via splitRow (used
to build hdr and rows) but downstream rendering uses renderInline which
currently doesn't handle nested inline emphasis (e.g., "**bold** and *italic*"),
so update the inline renderer to support nested/combined inline formatting or
ensure table cell strings are processed by a more complete inline parse: modify
the renderInline implementation (or replace its callsite) to parse strong (**)
before em (*) and allow nested tokens, or refactor renderInline to run a
recursive/ordered parse so splitRow/hdr/rows content renders both bold and
italic within the same cell correctly; look for renderInline and splitRow usages
in markdown.tsx to apply the fix.
In `@website/src/components/sandbox.tsx`:
- Around line 291-306: The simulated success output is hardcoded and doesn't
reflect edits to code or globalsText, which contradicts the UI copy; update the
simulation in the sandbox component so the output derives from the current
code/globalsText instead of static JSON: parse globalsText (handle JSON parse
errors) and compute the "summary" array and "count" dynamically (push to the
same lines array used currently), or alternatively change the UI copy text to
explicitly state this is a static preview; locate the code that builds the lines
array (the block that pushes "{", '"summary": [', items, '"count": 5', and the
meta line) and either replace the hardcoded items with computed items based on
parsed globalsText and code or update the UI string shown to the user
accordingly.
In `@website/src/lib/highlight.ts`:
- Around line 541-584: The markdown renderer currently routes JSON fenced code
to HighlightedGeneric which calls highlightGeneric; update the renderer (in
markdown.tsx) so that when the language equals "json" it uses highlightJson
instead of highlightGeneric (or add a distinct HighlightedJson component that
calls highlightJson and use that for the "json" case); ensure you reference the
existing HighlightedGeneric/ highlightGeneric behavior and replace or branch it
to call highlightJson for proper key/value highlighting.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4a08bbae-e08e-46ed-9bda-48e1b65eeeac
📒 Files selected for processing (17)
- website/README.md
- website/scripts/sync-docs.mjs
- website/src/app/api/run/route.ts
- website/src/app/robots.ts
- website/src/components/docs.tsx
- website/src/components/install.tsx
- website/src/components/markdown.tsx
- website/src/components/mermaid-block.tsx
- website/src/components/playground.tsx
- website/src/components/sandbox.tsx
- website/src/lib/doc-source.ts
- website/src/lib/docs-data.ts
- website/src/lib/examples.ts
- website/src/lib/highlight.ts
- website/src/lib/playground-storage.ts
- website/src/lib/rate-limit.ts
- website/src/lib/share.ts
✅ Files skipped from review due to trivial changes (5)
- website/README.md
- website/scripts/sync-docs.mjs
- website/src/lib/share.ts
- website/src/lib/rate-limit.ts
- website/src/components/playground.tsx
🚧 Files skipped from review as they are similar to previous changes (4)
- website/src/app/robots.ts
- website/src/components/install.tsx
- website/src/lib/playground-storage.ts
- website/src/app/api/run/route.ts
- `components/highlighted-code.tsx`, `components/markdown.tsx` — add a
  dedicated `HighlightedJson` component that uses the existing
  `highlightJson()` (key/value distinction via colon-lookahead, emits
  `tok-*` classes that match the existing CSS) and route `json` fenced
  code blocks through it instead of `HighlightedGeneric`. JSON config
  snippets in the docs now render with proper key vs string colouring.
- `components/sandbox.tsx` — clarify the sandbox preview's
  static-on-success behaviour. The error paths (`fetch`/`eval`/
  `process.`/runaway loops, malformed globals JSON) react to edits as
  before; the success-path JSON is illustrative. Updated heading to
  "Sandbox preview" and added a pointer to the Playground for real
  execution.
The third nitpick (nested inline emphasis in markdown table cells) was
verified against the synced docs corpus and skipped: the example given
(`**bold** and *italic*`) already renders correctly as sibling tokens,
and a regex sweep finds zero occurrences of actually-nested emphasis
(`**bold *italic***`) in any synced doc. Will revisit if real content
ever needs it.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@website/src/components/markdown.tsx`:
- Around line 148-169: The fallback anchor rendering in the "link" case
currently injects tok.href directly into <a> (see tok.href and resolveDocHref)
which permits unsafe schemes like "javascript:"; update the fallback branch to
validate and sanitize tok.href before using it: parse the href (e.g., with the
URL constructor or a safe-scheme whitelist) and only allow safe schemes such as
http, https, mailto, tel (and the existing in-page "#" handling); if the href is
invalid or uses a disallowed scheme, render the link text without an href or
escape/omit the link to prevent navigation, and keep using target="_blank" and
rel="noopener noreferrer" only for approved external URLs.
- Around line 275-279: The paragraph continuation guard in parseMarkdown is too
permissive—update the while condition inside parseMarkdown (the loop that
advances i and pushes into buf) to also stop when the current line is a
horizontal rule (match /^---+\s*$/) or when the current/next line starts a table
by calling isTableStart(lines[i], lines[i+1] || ""). Concretely, add an extra
negative condition to the existing regex guard to test for HR lines and call
isTableStart so paragraph parsing won't swallow `---` or table starts; keep the
changes localized to the loop that manipulates i and buf in parseMarkdown.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 37ddba45-4344-44e0-8cac-290ea3e0491e
📒 Files selected for processing (3)
- website/src/components/highlighted-code.tsx
- website/src/components/markdown.tsx
- website/src/components/sandbox.tsx
✅ Files skipped from review due to trivial changes (1)
- website/src/components/sandbox.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- website/src/components/highlighted-code.tsx
Markdown parser hardening (`components/markdown.tsx`):
- Extract `safeHref(href)` and route the fallback `<a href>` branch
through it. Allowlists `http`, `https`, `mailto`, `tel`, plus in-page
anchors / root-relative paths / bare relative paths. `javascript:`,
`data:`, `vbscript:`, `file:` etc. fall through to plain text — defense
in depth even though the corpus is our own docs.
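A standalone sketch of the allowlist logic (the real `safeHref` may differ in return contract; here rejected hrefs come back as `null` so the caller can render plain text):

```typescript
const SAFE_SCHEMES = new Set(["http:", "https:", "mailto:", "tel:"]);

export function safeHref(href: string): string | null {
  // In-page anchors, root-relative paths, and dot-relative paths carry
  // no scheme and are safe to pass through.
  if (
    href.startsWith("#") ||
    href.startsWith("/") ||
    href.startsWith("./") ||
    href.startsWith("../")
  ) {
    return href;
  }
  try {
    const url = new URL(href); // absolute URL: check the scheme
    return SAFE_SCHEMES.has(url.protocol) ? href : null;
  } catch {
    // Not an absolute URL: allow bare relative paths, reject anything
    // that still smuggles a scheme prefix.
    return /^[a-z][a-z0-9+.-]*:/i.test(href) ? null : href;
  }
}
```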
- Extract `isHr(line)` and `isTableStart(line, next)` and reuse them in
the paragraph-continuation guard. Previously `parseMarkdown` would
swallow a bare `---` HR or a `| col |` table header into the
preceding paragraph; now both correctly close the paragraph.
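The extracted guards could look like this. These are sketches with assumed signatures — the real `isHr`/`isTableStart` may accept slightly different inputs or patterns:

```typescript
// A thematic break: three or more dashes alone on the line.
export function isHr(line: string): boolean {
  return /^-{3,}\s*$/.test(line);
}

// A table starts with a pipe-delimited header row whose next line is a
// divider row like "| --- | :---: |".
export function isTableStart(line: string, next: string): boolean {
  const header = /^\s*\|.*\|\s*$/.test(line);
  const divider = /^\s*\|?\s*:?-{3,}:?\s*(\|\s*:?-{3,}:?\s*)*\|?\s*$/.test(next);
  return header && divider;
}
```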
API runtime robustness (`app/api/run/route.ts`):
- Attach `child.stdin.on("error", ...)`. The child can close stdin
early (parse error, OOM, our timeout-kill, …) and the resulting
EPIPE was previously unhandled, which crashes the request worker.
Silently drop the EPIPE; surface anything else via `console.error`
for the operator. The runner's own `error` / `close` events still
produce the response payload.
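A minimal sketch of that handler, assuming the handler shape described above (`guardStdin` is a hypothetical helper name; the real code attaches the listener inline in `route.ts`):

```typescript
import { Writable } from "node:stream";

// Hypothetical helper: swallow EPIPE (child closed stdin early), surface
// anything else to the operator. Without any 'error' listener, an EPIPE
// emitted on stdin would throw and crash the request worker.
function guardStdin(stdin: Writable): void {
  stdin.on("error", (err: NodeJS.ErrnoException) => {
    if (err.code === "EPIPE") return; // expected when the child exits early
    console.error("runner stdin error:", err);
  });
}
```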
Test suite (`src/__tests__/`):
- Set up `bun test` (`@types/bun`, `test` script in package.json).
- `share.test.ts` — base64url round-trip incl. multi-byte code, URL-
safe charset, malformed-base64 rejection, type validation for
optional `mode`/`asi`/`version` fields.
- `github-versions.test.ts` — `pickPrecedenceVersions` against the
worked-out cases (current 0.x, post-1.0, post-2.x, single release,
duplicate input, invalid tags, order-independence).
- `highlight.test.ts` — regression for the `@`-in-Pascal OOM
(`@MyProc` infinite-loop), Pascal `{...}` and `(* *)` block
comments, "no comment swallowing in C-family", JSON key/value
distinction, JSON literal/number tokenization, round-trip invariant.
- `rate-limit.test.ts` — token-bucket window expiry by driving
`Date.now`, monotonic countdown, key isolation, and the `clientIp`
header preference order (cf-connecting-ip → x-real-ip → x-vercel-
forwarded-for → right-most x-forwarded-for).
- `markdown.test.ts` — `safeHref` allowlist, paragraph-vs-HR /
paragraph-vs-table-start segmentation, escaped-pipe table cells.
Export `safeHref`, `parseMarkdown`, and the `Block` type from
`components/markdown.tsx` for testability — they're side-effect-free.
50 tests pass, lint + tsc + build clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
There was a problem hiding this comment.
Actionable comments posted: 3
🧹 Nitpick comments (4)
website/src/__tests__/github-versions.test.ts (1)
4-81: Consider a table-driven pattern to reduce test duplication as this matrix grows. Optional, but converting repetitive `test` + `expect` blocks into data-driven cases will make future precedence-rule expansions easier to maintain.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/__tests__/github-versions.test.ts` around lines 4 - 81, The tests duplicate the same pattern for pickPrecedenceVersions; convert them to a table-driven style by creating an array of test cases (each with a descriptive name, input tags array, and expected output) and iterate with Jest's test.each or a simple forEach to run the assertion for each row; target the describe block and replace individual test(...) blocks that call pickPrecedenceVersions(...) with a single parametrized loop using the case entries so future matrix additions only require adding another case with the test name, input, and expected output.
website/src/__tests__/share.test.ts (2)
39-47: Extract duplicated base64url JSON encoding test helper. The byte → binary → btoa → base64url transform is duplicated in two tests. A small helper will reduce drift and keep intent clearer.
♻️ Proposed refactor
+const encodeJsonToBase64Url = (value: unknown) => {
+  const bytes = new TextEncoder().encode(JSON.stringify(value));
+  let bin = "";
+  for (const byte of bytes) bin += String.fromCharCode(byte);
+  return btoa(bin).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
+};
+
 describe("share encoding", () => {
@@
   test("rejects payload without a string `code` field", () => {
-    // Encode a payload that's missing `code` entirely.
-    const bytes = new TextEncoder().encode(
-      JSON.stringify({ mode: "bytecode" }),
-    );
-    let bin = "";
-    for (let i = 0; i < bytes.length; i++) bin += String.fromCharCode(bytes[i]);
-    const malformed = btoa(bin)
-      .replace(/\+/g, "-")
-      .replace(/\//g, "_")
-      .replace(/=+$/, "");
+    const malformed = encodeJsonToBase64Url({ mode: "bytecode" });
     expect(decodeShare(malformed)).toBeNull();
   });
@@
-  const malformed = JSON.stringify({
+  const malformed = {
     code: "ok;",
     mode: 42,
     asi: "yes",
     version: { tag: "v1.0.0" },
-  });
-  const bytes = new TextEncoder().encode(malformed);
-  let bin = "";
-  for (let i = 0; i < bytes.length; i++) bin += String.fromCharCode(bytes[i]);
-  const encoded = btoa(bin)
-    .replace(/\+/g, "-")
-    .replace(/\//g, "_")
-    .replace(/=+$/, "");
+  };
+  const encoded = encodeJsonToBase64Url(malformed);
Also applies to: 59-65
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/__tests__/share.test.ts` around lines 39 - 47, Extract the duplicated JSON→base64url encoding logic into a small test helper (e.g., encodeJsonBase64Url) and replace both inline blocks (the TextEncoder + loop + btoa + replace sequence used to produce malformed) with calls to that helper; the helper should accept an object or string, perform JSON.stringify when given an object, encode with TextEncoder, build the binary string via String.fromCharCode over bytes, call btoa, then convert to base64url by replacing '+'→'-', '/'→'_', and trimming '=' padding, and update both occurrences (the block starting with new TextEncoder... and the similar block at the other occurrence) to use encodeJsonBase64Url(...) instead.
68-73: Make “dropped optional fields” assertion explicit. At Line 68, `toEqual` with `undefined` keys is less explicit about true key removal. If the intent is “drop fields,” assert absence directly.
✅ Suggested assertion tightening
-  expect(decoded).toEqual({
-    code: "ok;",
-    mode: undefined,
-    asi: undefined,
-    version: undefined,
-  });
+  expect(decoded).not.toBeNull();
+  expect(decoded).toMatchObject({ code: "ok;" });
+  expect(decoded).not.toHaveProperty("mode");
+  expect(decoded).not.toHaveProperty("asi");
+  expect(decoded).not.toHaveProperty("version");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/__tests__/share.test.ts` around lines 68 - 73, The test currently asserts dropped optional fields by expecting keys with undefined values on the decoded object; instead make the assertion explicit by keeping the required field check (expect(decoded.code).toBe("ok;")) and replace the undefined-key checks with absence assertions such as expect(decoded).not.toHaveProperty("mode"), expect(decoded).not.toHaveProperty("asi"), and expect(decoded).not.toHaveProperty("version") so the intent “fields are removed” is unambiguous (locate the assertions around the decoded variable in share.test.ts).
website/src/components/markdown.tsx (1)
264-272: Consider a more unique placeholder for escaped pipes (optional). Using `String.fromCharCode(1)` (SOH control character) as a placeholder works for typical markdown, but if the source ever contains a literal SOH character, it would be incorrectly converted to `|` during restoration. Since this is a docs site with controlled content, the risk is minimal. For extra robustness, consider a longer sentinel string unlikely to appear in content:
-const PIPE_PLACEHOLDER = String.fromCharCode(1);
+const PIPE_PLACEHOLDER = "\x00__PIPE__\x00";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/markdown.tsx` around lines 264 - 272, The current splitRow uses PIPE_PLACEHOLDER = String.fromCharCode(1) which could collide with rare input; replace that single-char sentinel with a longer, unlikely string constant (e.g. '__ESCAPED_PIPE_PLACEHOLDER__') and update PIPE_RESTORE (new RegExp of that constant with 'g') and any uses in splitRow so escaped '\|' are replaced and restored safely; update the identifiers PIPE_PLACEHOLDER, PIPE_RESTORE, and the splitRow logic accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@website/src/__tests__/rate-limit.test.ts`:
- Around line 42-52: The test uses an unbounded while loop calling
rateLimit(key) which can hang if rateLimit never returns ok=false; change the
loop to a bounded loop using the rate-limit window/limit from the first response
(e.g., call let last = rateLimit(key); const max = last.limit ?? 1000; for (let
i = 0; i < max && last.ok; i++) last = rateLimit(key);) so the loop will run at
most the reported limit (with a safe fallback like 1000) and then assert last.ok
is false and last.remaining is 0; apply the same change to the other test that
uses the unbounded while (both referencing rateLimit and uniqueKey).
In `@website/src/app/api/run/route.ts`:
- Around line 217-223: The response currently includes the full server file path
via the variable `binary` in the `transportError` call, which leaks internal
filesystem layout; update the `transportError` invocation in `route.ts` (the
call site using transportError(..., { status: 500, extra: { binary } })) to stop
sending the raw `binary` path to the client—either remove `binary` from `extra`
entirely or replace it with a non-sensitive value (e.g., a boolean flag, an
internal log ID, or only the executable basename) and ensure the full path is
retained only in server-side logs (keep the existing `transportError` call shape
but pass a safe value instead).
- Around line 196-212: The current stdout/stderr handlers use string.length
which counts UTF-16 code units and lets multibyte UTF-8 output exceed
MAX_OUTPUT_BYTES; change the logic in the child.stdout.on("data") and
child.stderr.on("data") handlers to work with raw Buffer byte lengths:
accumulate as Buffer (use Buffer.concat with previous buffer or track byte
length via Buffer.byteLength), check against MAX_OUTPUT_BYTES using
Buffer.byteLength (or buffer.length), truncate by slicing the Buffer to
MAX_OUTPUT_BYTES bytes before converting to utf8 string for storage in
stdoutBuf/stderrBuf, and keep the kill("SIGKILL") call on stdout when the byte
limit is exceeded; reference variables/functions: child.stdout.on,
child.stderr.on, stdoutBuf, stderrBuf, MAX_OUTPUT_BYTES, and chunk.
---
Nitpick comments:
In `@website/src/__tests__/github-versions.test.ts`:
- Around line 4-81: The tests duplicate the same pattern for
pickPrecedenceVersions; convert them to a table-driven style by creating an
array of test cases (each with a descriptive name, input tags array, and
expected output) and iterate with Jest's test.each or a simple forEach to run
the assertion for each row; target the describe block and replace individual
test(...) blocks that call pickPrecedenceVersions(...) with a single
parametrized loop using the case entries so future matrix additions only require
adding another case with the test name, input, and expected output.
In `@website/src/__tests__/share.test.ts`:
- Around line 39-47: Extract the duplicated JSON→base64url encoding logic into a
small test helper (e.g., encodeJsonBase64Url) and replace both inline blocks
(the TextEncoder + loop + btoa + replace sequence used to produce malformed)
with calls to that helper; the helper should accept an object or string, perform
JSON.stringify when given an object, encode with TextEncoder, build the binary
string via String.fromCharCode over bytes, call btoa, then convert to base64url
by replacing '+'→'-', '/'→'_', and trimming '=' padding, and update both
occurrences (the block starting with new TextEncoder... and the similar block at
the other occurrence) to use encodeJsonBase64Url(...) instead.
- Around line 68-73: The test currently asserts dropped optional fields by
expecting keys with undefined values on the decoded object; instead make the
assertion explicit by keeping the required field check
(expect(decoded.code).toBe("ok;")) and replace the undefined-key checks with
absence assertions such as expect(decoded).not.toHaveProperty("mode"),
expect(decoded).not.toHaveProperty("asi"), and
expect(decoded).not.toHaveProperty("version") so the intent “fields are removed”
is unambiguous (locate the assertions around the decoded variable in
share.test.ts).
In `@website/src/components/markdown.tsx`:
- Around line 264-272: The current splitRow uses PIPE_PLACEHOLDER =
String.fromCharCode(1) which could collide with rare input; replace that
single-char sentinel with a longer, unlikely string constant (e.g.
'__ESCAPED_PIPE_PLACEHOLDER__') and update PIPE_RESTORE (new RegExp of that
constant with 'g') and any uses in splitRow so escaped '\|' are replaced and
restored safely; update the identifiers PIPE_PLACEHOLDER, PIPE_RESTORE, and the
splitRow logic accordingly.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 2b166fae-1b05-4735-b9d6-0db7df8be648
⛔ Files ignored due to path filters (1)
website/bun.lock is excluded by !**/*.lock
📒 Files selected for processing (8)
- website/package.json
- website/src/__tests__/github-versions.test.ts
- website/src/__tests__/highlight.test.ts
- website/src/__tests__/markdown.test.ts
- website/src/__tests__/rate-limit.test.ts
- website/src/__tests__/share.test.ts
- website/src/app/api/run/route.ts
- website/src/components/markdown.tsx
✅ Files skipped from review due to trivial changes (1)
- website/package.json
…st hardening
API runtime (`app/api/run/route.ts`):
- **Byte-counted output limits.** `chunk.toString("utf8")` followed by
`string.length` was counting UTF-16 code units, so multibyte UTF-8
output (emoji, non-Latin scripts) could overshoot `MAX_OUTPUT_BYTES`
by up to 4× before being noticed. Switch both stdout/stderr handlers
to accumulate `Buffer[]`, track `Buffer.length` byte counts, slice on
byte boundaries when over the cap (subarray, not String.slice — so we
don't split mid-codepoint), and decode once via `Buffer.concat(...)
.toString("utf8")` in the `close` handler. Symmetric to the
`Buffer.byteLength(code, "utf8")` fix already applied to the input
side.
- **Redact server file path from spawn-error response.** The 500
response on `child.on("error", ...)` was sending the resolved
absolute `binary` path back to the client, which leaks the deploy's
filesystem layout. Now `extra.binary` is just `basename(binary)`
(e.g., `"GocciaScriptLoader"`); the full path is logged server-side
via `console.error` for the operator.
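The byte-counted accumulation described above can be sketched as a small sink. The name `makeByteCappedSink` and the cap value are illustrative — the real logic sits inline in the stdout/stderr `data` handlers in `route.ts`:

```typescript
// Accumulate stream chunks up to a byte cap. Counting Buffer bytes (not
// string.length UTF-16 code units) keeps multibyte output from overshooting.
function makeByteCappedSink(cap: number) {
  const chunks: Buffer[] = [];
  let bytes = 0;
  let hitCap = false;
  return {
    // Returns false once the cap is reached — the caller kills the child.
    push(chunk: Buffer): boolean {
      if (hitCap) return false;
      const room = cap - bytes;
      if (chunk.length <= room) {
        chunks.push(chunk);
        bytes += chunk.length;
        return true;
      }
      chunks.push(chunk.subarray(0, room)); // slice on a byte boundary
      bytes = cap;
      hitCap = true;
      return false;
    },
    // Decode once, at the end — mirrors the close-handler decode.
    text(): string {
      return Buffer.concat(chunks).toString("utf8");
    },
    get truncated(): boolean {
      return hitCap;
    },
  };
}
```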
Test hardening:
- `rate-limit.test.ts` — replace unbounded `while (last.ok)` loops with
a `drainBucket(key)` helper that bounds iteration at
`result.limit + 16` slack. A counter regression in `rateLimit` would
previously hang the test suite indefinitely; now it surfaces as an
`ok: true` assertion failure.
- `share.test.ts` — extract the duplicated TextEncoder + btoa +
base64url-replace dance into `encodeJsonBase64Url(value)`, used by
the two negative-path tests that need to inject malformed payloads
that `encodeShare` can't produce.
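The bounded drain for `rate-limit.test.ts` can be sketched as follows, assuming `rateLimit` returns `{ ok, limit, remaining }` (the fake limiter below is test scaffolding, not the real module):

```typescript
type RateResult = { ok: boolean; limit: number; remaining: number };

// Bounded drain: at most limit + 16 calls, so a counter regression in
// rateLimit fails an assertion instead of hanging the suite forever.
function drainBucket(
  rateLimit: (key: string) => RateResult,
  key: string,
): RateResult {
  let last = rateLimit(key);
  const max = last.limit + 16;
  for (let i = 0; i < max && last.ok; i++) last = rateLimit(key);
  return last;
}

// Minimal stand-in limiter for demonstration purposes.
function fakeLimiter(limit: number): (key: string) => RateResult {
  let used = 0;
  return () => {
    used++;
    return { ok: used <= limit, limit, remaining: Math.max(0, limit - used) };
  };
}
```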
Markdown parser (`components/markdown.tsx`):
- Replace the `String.fromCharCode(1)` table-cell pipe sentinel with a
longer, unlikely string `"__GOCCIA_MD_ESCAPED_PIPE__"`. Collisions
were already implausible against the synced docs, but the longer
sentinel makes the safety more robust to future content.
50 tests pass, lint + tsc + build clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
There was a problem hiding this comment.
🧹 Nitpick comments (1)
website/src/components/markdown.tsx (1)
162-173: Image relative path resolution is limited to a single hard-coded case. Only `./logo.png` is remapped to `/logo.png`. Other relative image paths (e.g., `./images/diagram.png`) will be rendered as-is and may not resolve correctly depending on the page URL structure. Consider a more general approach if docs may include other relative images:
🔧 Suggested generalization
 case "image": {
   let src = tok.src;
-  if (src === "./logo.png") src = "/logo.png";
+  // Resolve relative paths (./foo.png or ../foo.png) to root-relative
+  if (src.startsWith("./")) src = src.slice(1);
+  else if (src.startsWith("../")) src = "/" + src.replace(/^(\.\.\/)+/, "");
   return (
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/markdown.tsx` around lines 162 - 173, The image handling in the "image" case only rewrites a single hard-coded path (`./logo.png`) so other relative image paths fail; update the logic that reads `tok.src` (the `src` variable in the "image" branch) to detect relative paths (e.g., those starting with "./" or "../") and resolve them to absolute URLs before rendering (for example by resolving against the current page/base using the browser URL API or site base path), then use that resolved value for `src` in the returned <img>; keep references to `tok.src`, the `src` local, and the "image" case so reviewers can find and verify the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@website/src/components/markdown.tsx`:
- Around line 162-173: The image handling in the "image" case only rewrites a
single hard-coded path (`./logo.png`) so other relative image paths fail; update
the logic that reads `tok.src` (the `src` variable in the "image" branch) to
detect relative paths (e.g., those starting with "./" or "../") and resolve them
to absolute URLs before rendering (for example by resolving against the current
page/base using the browser URL API or site base path), then use that resolved
value for `src` in the returned <img>; keep references to `tok.src`, the `src`
local, and the "image" case so reviewers can find and verify the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1437b1d3-43b9-4990-9571-f6d7d7203518
📒 Files selected for processing (4)
- website/src/__tests__/rate-limit.test.ts
- website/src/__tests__/share.test.ts
- website/src/app/api/run/route.ts
- website/src/components/markdown.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
- website/src/__tests__/rate-limit.test.ts
- website/src/__tests__/share.test.ts
There was a problem hiding this comment.
Actionable comments posted: 3
♻️ Duplicate comments (1)
website/src/components/markdown.tsx (1)
72-77: ⚠️ Potential issue | 🟠 Major
Harden `safeHref` against whitespace/control-character scheme bypass. At Line 76, untrimmed values like `" javascript:alert(1)"` can bypass the scheme allowlist and still become navigable links in the fallback branch.
🔒 Suggested hardening
-export function safeHref(href: string): string | null {
-  if (!href) return null;
-  if (href.startsWith("#") || href.startsWith("/")) return href;
-  const m = href.match(/^([a-z][a-z0-9+.-]*):/i);
-  if (!m) return href; // bare relative path or fragment-relative
-  return SAFE_HREF_SCHEMES.has(m[1].toLowerCase()) ? href : null;
+export function safeHref(rawHref: string): string | null {
+  const href = rawHref.trim();
+  if (!href) return null;
+  if (/[\u0000-\u001F\u007F]/.test(href)) return null;
+  if (href.startsWith("#") || href.startsWith("/")) return href;
+  const m = href.match(/^([a-z][a-z0-9+.-]*):/i);
+  if (!m) return href; // bare relative path or fragment-relative
+  return SAFE_HREF_SCHEMES.has(m[1].toLowerCase()) ? href : null;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/markdown.tsx` around lines 72 - 77, The safeHref function is vulnerable to whitespace/control-character scheme bypasses (e.g., " javascript:..."); fix by normalizing and validating the input first: trim the href (const h = href?.trim()), return null if empty or if it contains control/forbidden characters (e.g., test with /[\u0000-\u001F\u007F]/ or leading whitespace), then perform the checks using the trimmed h (check h.startsWith("#") or h.startsWith("/"), run the scheme regex against h, and only allow schemes present in SAFE_HREF_SCHEMES; ensure you reference and update the safeHref function and its regex usage to operate on the trimmed/validated value.
🧹 Nitpick comments (1)
website/src/components/markdown.tsx (1)
341-367: Generate unique heading IDs for repeated titles. Using `slugify(b.text)` directly can produce duplicate `id`s when headings repeat (e.g., multiple “Examples”), which breaks deterministic anchor navigation and creates invalid duplicate IDs.
♻️ Suggested dedupe approach
 export function Markdown({ source }: { source: string }) {
   const blocks = useMemo(() => parseMarkdown(source), [source]);
+  const slugCounts = new Map<string, number>();
@@
-      const slug = slugify(b.text);
+      const baseSlug = slugify(b.text);
+      const seen = slugCounts.get(baseSlug) ?? 0;
+      slugCounts.set(baseSlug, seen + 1);
+      const slug = seen === 0 ? baseSlug : `${baseSlug}-${seen}`;
Verify each finding against the current code and only fix it if needed. In `@website/src/components/markdown.tsx` around lines 341 - 367, The current heading ID generation uses slugify(b.text) directly which can produce duplicate IDs for repeated headings; update the logic around the slug variable (where slugify(b.text) is called and used as the id on the <h2>/<h3>/<h4> and the anchor href) to dedupe duplicates by maintaining a local map/counter keyed by the base slug and appending a numeric suffix (e.g., "-1", "-2") for repeats so each generated id is unique and the anchor href matches the final id; ensure the same dedupe logic is applied for all branches that render headings (the places returning <h2>, <h3>, and the <h4> fallback) so anchors remain deterministic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@website/src/app/api/run/route.ts`:
- Around line 152-185: Add abort handling so the spawned child is killed
immediately when the client disconnects: attach
req.signal.addEventListener("abort", ...) after creating child (spawn(...)) to
call child.kill("SIGKILL") and then call finish(...) to resolve the Promise;
ensure the listener reference is removed inside finish() to avoid leaks. Update
every spawn usage in this route (the variables/functions involved are spawn,
binary, args, child, finish, resolved) to register the same abort handler and to
remove it in finish() so the child is always cleaned up on client abort.
- Around line 106-133: The handler currently calls req.json() (parsing the whole
body) before enforcing MAX_CODE_BYTES, allowing oversized payloads to be
buffered; change the request handling in route.ts so you check and enforce size
limits before parsing: first inspect req.headers.get("content-length") and
immediately return transportError with code "CODE_TOO_LARGE" if it exceeds
MAX_CODE_BYTES (or absent/too large and you opt to reject), and for
chunked/unknown lengths read req.body as a stream and accumulate up to
MAX_CODE_BYTES (rejecting further bytes) before decoding to JSON into RunBody;
ensure all early returns use the existing transportError pattern and keep
subsequent logic that computes codeBytes and validates code unchanged.
In `@website/src/components/markdown.tsx`:
- Around line 276-283: The header parsing currently uses hdr =
splitRow(line).filter(Boolean) which removes all empty header cells and can
desynchronize columns versus the rows array (which strips only outer empty
cells). Change the header trimming to use the same outer-pipe-only filter as the
rows (i.e. use splitRow(line).filter((c, idx, arr) => !(idx === 0 && c === "")
&& !(idx === arr.length - 1 && c === ""))) so hdr and rows use identical
trimming logic; update references to hdr, splitRow, and rows accordingly.
---
Duplicate comments:
In `@website/src/components/markdown.tsx`:
- Around line 72-77: The safeHref function is vulnerable to
whitespace/control-character scheme bypasses (e.g., " javascript:..."); fix by
normalizing and validating the input first: trim the href (const h =
href?.trim()), return null if empty or if it contains control/forbidden
characters (e.g., test with /[\u0000-\u001F\u007F]/ or leading whitespace), then
perform the checks using the trimmed h (check h.startsWith("#") or
h.startsWith("/"), run the scheme regex against h, and only allow schemes
present in SAFE_HREF_SCHEMES; ensure you reference and update the safeHref
function and its regex usage to operate on the trimmed/validated value.
---
Nitpick comments:
In `@website/src/components/markdown.tsx`:
- Around line 341-367: The current heading ID generation uses slugify(b.text)
directly which can produce duplicate IDs for repeated headings; update the logic
around the slug variable (where slugify(b.text) is called and used as the id on
the <h2>/<h3>/<h4> and the anchor href) to dedupe duplicates by maintaining a
local map/counter keyed by the base slug and appending a numeric suffix (e.g.,
"-1", "-2") for repeats so each generated id is unique and the anchor href
matches the final id; ensure the same dedupe logic is applied for all branches
that render headings (the places returning <h2>, <h3>, and the <h4> fallback) so
anchors remain deterministic.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 59be8686-edb5-4925-89b9-96d1b0687d59
📒 Files selected for processing (4)
- website/src/__tests__/rate-limit.test.ts
- website/src/__tests__/share.test.ts
- website/src/app/api/run/route.ts
- website/src/components/markdown.tsx
✅ Files skipped from review due to trivial changes (2)
- website/src/__tests__/share.test.ts
- website/src/__tests__/rate-limit.test.ts
…robustness
API runtime (`app/api/run/route.ts`):
- **Enforce body size before parsing.** `req.json()` would buffer
arbitrarily large payloads before our per-field check fired — DoS
vector. Now: check `Content-Length` header first (reject 413 on
oversize), then stream-read `req.body` via `readBodyWithCap()` with
a `MAX_BODY_BYTES` cap (`MAX_CODE_BYTES + 1 KB envelope overhead`),
cancelling the reader on overflow. JSON parse runs only on the
capped string.
- **Kill the child on client abort.** Previously, if the client
disconnected mid-execution, the spawned loader kept running until
the watchdog timer fired (TIMEOUT_MS + 1.5 s), wasting CPU and
rate-limit budget. Now `req.signal.addEventListener("abort", ...)`
kills with SIGKILL and resolves the Promise with a 499 response;
the listener is removed in `finish()` so it never leaks across
requests when the child finishes naturally first. Includes a
`req.signal.aborted` early-out for already-aborted requests.
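The capped body read can be sketched against the web-standard `ReadableStream` that `req.body` exposes (names from the commit message; the actual status handling and constants live in `route.ts`):

```typescript
// Stream-read a request body, bailing out as soon as the byte total
// exceeds maxBytes. Returns null on overflow so the caller can 413.
async function readBodyWithCap(
  body: ReadableStream<Uint8Array>,
  maxBytes: number,
): Promise<string | null> {
  const reader = body.getReader();
  const chunks: Uint8Array[] = [];
  let total = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    total += value.byteLength;
    if (total > maxBytes) {
      await reader.cancel(); // stop pulling further chunks
      return null;
    }
    chunks.push(value);
  }
  return Buffer.concat(chunks).toString("utf8");
}
```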
Markdown parser (`components/markdown.tsx`):
- **`safeHref` whitespace / control-character bypass.** Browsers
strip TAB/CR/LF/leading whitespace from `href` values before
navigation, so the previous scheme regex would let
`" javascript:..."` and `"j\tavascript:..."` slip through and fire
the dangerous scheme. Now: scan for any C0 control char (0x00–0x1F)
or DEL (0x7F) up front and reject; trim whitespace before running
scheme validation. Fixes the input-cleanup mismatch with the
browser's stripping algorithm.
- **Header-cell filter desynced columns.** The header used
`filter(Boolean)` (drops ALL empty cells) while rows used a
positional outer-only filter — `| a | | b |` produced a 2-column
header against a 3-column row. Extracted `stripOuterEmpty(cells)`
helper used by both.
- **Heading ID uniqueness.** `slugify(b.text)` would emit duplicate
`id` attributes for repeated headings (anchors then jump only to
the first). New `seenSlugs` map appends `-2`, `-3`, … to repeated
base slugs; the rendered anchor href stays in sync with the
emitted id.
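Two of the fixes above can be sketched as small helpers — the names mirror the commit message, but the bodies here are illustrative, not the actual `markdown.tsx` code:

```typescript
// Drop only a leading/trailing empty cell produced by outer pipes,
// never interior empties — used identically for header and body rows.
function stripOuterEmpty(cells: string[]): string[] {
  let start = 0;
  let end = cells.length;
  if (end > start && cells[0] === "") start = 1;
  if (end > start && cells[end - 1] === "") end -= 1;
  return cells.slice(start, end);
}

// Heading-slug dedupe: first occurrence keeps the base slug, repeats
// get -2, -3, … so emitted ids (and their anchor hrefs) stay unique.
function makeSlugDedupe(): (base: string) => string {
  const seen = new Map<string, number>();
  return (base) => {
    const n = (seen.get(base) ?? 0) + 1;
    seen.set(base, n);
    return n === 1 ? base : `${base}-${n}`;
  };
}
```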
Tests (`src/__tests__/markdown.test.ts`):
- Added `safeHref` cases for whitespace-prefixed schemes, embedded
control chars, whitespace-trimming around safe schemes, and
whitespace-only input rejection.
- Added a table-column-parity test for intentional empty header
cells (the `| a | | b |` case).
55 tests pass, lint + tsc + build clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
There was a problem hiding this comment.
🧹 Nitpick comments (1)
website/src/app/api/run/route.ts (1)
93-96: Unnecessary intermediate Buffer allocations in `readBodyWithCap`. `Buffer.concat()` accepts `Uint8Array[]` directly, so the `.map((c) => Buffer.from(c))` creates an extra intermediate array and redundant per-chunk Buffer allocations on every request.
♻️ Suggested simplification
-  return Buffer.concat(chunks.map((c) => Buffer.from(c))).toString("utf8");
+  return Buffer.concat(chunks).toString("utf8");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/app/api/run/route.ts` around lines 93 - 96, The function readBodyWithCap is creating unnecessary per-chunk Buffer allocations by mapping chunks to Buffer.from before calling Buffer.concat; instead, pass the existing Uint8Array[] (the chunks array) directly to Buffer.concat and return .toString("utf8") (i.e., replace Buffer.concat(chunks.map(...)) with Buffer.concat(chunks)). Ensure the variable chunks remains a Uint8Array[] (or Buffer[]) so Buffer.concat accepts it directly and remove the .map((c) => Buffer.from(c)) call.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@website/src/app/api/run/route.ts`:
- Around line 93-96: The function readBodyWithCap is creating unnecessary
per-chunk Buffer allocations by mapping chunks to Buffer.from before calling
Buffer.concat; instead, pass the existing Uint8Array[] (the chunks array)
directly to Buffer.concat and return .toString("utf8") (i.e., replace
Buffer.concat(chunks.map(...)) with Buffer.concat(chunks)). Ensure the variable
chunks remains a Uint8Array[] (or Buffer[]) so Buffer.concat accepts it directly
and remove the .map((c) => Buffer.from(c)) call.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f5b7c2ae-9f35-4a53-b9c2-8f6d5117023b
📒 Files selected for processing (3)
- website/src/__tests__/markdown.test.ts
- website/src/app/api/run/route.ts
- website/src/components/markdown.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
- website/src/__tests__/markdown.test.ts
- website/src/components/markdown.tsx
`Buffer.concat` accepts `Uint8Array[]` directly (Buffer extends Uint8Array), so the per-chunk `chunks.map((c) => Buffer.from(c))` was copying every stream chunk into a new Buffer before the final concat copy. Pass `chunks` (already `Uint8Array[]`) straight to `Buffer.concat` — one allocation instead of N+1. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- **`formatStars` matches GitHub's own button**: drop trailing `.0` so integer-thousands render `1k` / `2k` / `10k` instead of `1.0k`, while fractional thousands keep one decimal (`2.4k`, `85.3k`). Same rule applied to the millions branch (`1m`, `1.2m`, `85.3m`). New `formatStars` test block in `github-versions.test.ts` covers the boundary cases (`< 1k`, integer multiples, rounding at 1049 vs 1050, millions).
- **Reframe the audience headline**: "designed for AI agents and embedded apps" → "designed for tinkerers, embedding and AI agents" across every surface that carried the old framing — hero lede in `landing.tsx`, OG/Twitter image text in `opengraph-image.tsx`, root `layout.tsx` description (page meta + OG + Twitter), `/sandbox` page meta. The narrower "Sandbox · AI agents" section kicker stays as-is — that section is specifically about the agent use case.
- **Simpler version display on /install**: replace the boxed "LATEST RELEASE — v0.6.1 — Apr 25, 2026" pill with a plain prose line "Latest version v0.6.1 released April 25, 2026", with the version itself a subtle underlined link to the release page. The GitHub-related action links (Release notes, Source on GitHub) move to a separate flex row beneath the version line. Less chrome, reads naturally.
- **Pre-1.0 banner width**: cap `.prestable-banner` at `max-width: 70ch` to match `.install-method p` below — banner now shares a prose column with the rest of the install-page text instead of spanning to full container width and looking stranded between the narrow section head and the structured method cards.

60 tests pass, lint + tsc + build clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
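A sketch consistent with the rule above — the real `formatStars` lives in the website's stars-badge code, and the exact rounding step here is inferred from the listed boundary cases, so treat it as an approximation:

```typescript
// Format a star count GitHub-button style: "999", "1k", "2.4k", "1.2m".
// Integer multiples drop the ".0"; fractional values keep one decimal.
function formatStars(n: number): string {
  const scale = (v: number, suffix: string): string => {
    const s = (Math.round(v * 10) / 10).toFixed(1).replace(/\.0$/, "");
    return `${s}${suffix}`;
  };
  if (n >= 1_000_000) return scale(n / 1_000_000, "m");
  if (n >= 1_000) return scale(n / 1_000, "k");
  return String(n);
}
```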
Actionable comments posted: 2
🧹 Nitpick comments (5)
website/src/app/globals.css (2)
1118-1127: Duplicate rule blocks for `.bi-tooltip code`. These two consecutive rule blocks target the same selector and can be consolidated into a single rule.
♻️ Suggested consolidation
```diff
 .bi-tooltip code {
   white-space: pre;
-}
-.bi-tooltip code {
   font-family: inherit;
   background: transparent;
   border: 0;
   padding: 0;
   color: inherit;
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/app/globals.css` around lines 1118 - 1127, The CSS contains two consecutive rule blocks for the same selector ".bi-tooltip code"; consolidate them into a single rule by merging both blocks' declarations (white-space, font-family, background, border, padding, color) into one ".bi-tooltip code" rule to remove duplication and keep styles identical.
2664-2664: Keyframe names use camelCase instead of kebab-case. The linter flags `copiedPop`, `routeIn`, `archStageIn`, and `tccDrawIn` as not following the kebab-case convention. While camelCase is valid CSS, kebab-case is the conventional pattern for CSS identifiers and matches the naming style used throughout the rest of this stylesheet (e.g., `.route-anim`, `.arch-detail-anim`). This is a stylistic consistency issue and can be deferred if preferred.
Also applies to: 2697-2697, 2718-2718, 2833-2833
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/app/globals.css` at line 2664, Keyframe names are using camelCase (copiedPop, routeIn, archStageIn, tccDrawIn); rename these `@keyframes` to kebab-case (copied-pop, route-in, arch-stage-in, tcc-draw-in) and update every reference to them (animation, animation-name, or shorthand uses) to the new kebab-case identifiers so names stay consistent with the stylesheet convention; search for occurrences of "copiedPop", "routeIn", "archStageIn", and "tccDrawIn" and replace both the `@keyframes` declarations and all usages (e.g., in classes like .route-anim, .arch-detail-anim or any animation: declarations).

website/src/components/install.tsx (1)
68-116: Clipboard copy logic is duplicated with `CurlInstall` in `landing.tsx`. Both `CopyableCommand` here and `CurlInstall` in `landing.tsx` (lines 40-96) implement nearly identical clipboard copy patterns with the `copyTick` state, timeout cleanup, and `navigator.clipboard?.writeText`. Consider extracting a shared `useCopyToClipboard` hook to reduce duplication.
♻️ Example shared hook
```typescript
// In a new file: website/src/hooks/use-copy-to-clipboard.ts
import { useEffect, useState } from "react";

export function useCopyToClipboard(resetDelay = 1500) {
  const [copyTick, setCopyTick] = useState(0);
  const copy = async (text: string) => {
    let ok = false;
    try {
      if (navigator.clipboard?.writeText) {
        await navigator.clipboard.writeText(text);
        ok = true;
      }
    } catch {}
    if (ok) setCopyTick((t) => t + 1);
    return ok;
  };
  useEffect(() => {
    if (copyTick === 0) return;
    const id = setTimeout(() => setCopyTick(0), resetDelay);
    return () => clearTimeout(id);
  }, [copyTick, resetDelay]);
  return { copied: copyTick > 0, copyTick, copy };
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/install.tsx` around lines 68 - 116, The clipboard-copy logic is duplicated between CopyableCommand and CurlInstall; extract a reusable hook (e.g., useCopyToClipboard) that exposes copy(text: string) and copied (boolean) and an optional resetDelay, implement the timeout/cleanup and navigator.clipboard.writeText inside that hook, place it in a hooks file (e.g., use-copy-to-clipboard), then refactor CopyableCommand and CurlInstall to import and call useCopyToClipboard (remove local copyTick state and useEffect in both components) so both components call copy(command) and read copied for the UI.

website/src/components/landing.tsx (2)
30-38: Inconsistent function signatures in the `FEATURE_ICONS` mapping. The `drop` entry uses an arrow-function wrapper `({ size }) => <DropIcon size={size} />`, while `shield`, `leaf`, and `clock` reference the components directly. This works because all icons accept `{ size?: number }`, but the inconsistency is confusing. Consider making all entries consistent.
♻️ Proposed fix for consistency
```diff
 const FEATURE_ICONS: Record<
   FeatureIcon,
   (p: { size?: number }) => React.ReactElement
 > = {
-  drop: ({ size }) => <DropIcon size={size} />,
+  drop: DropIcon,
   shield: ShieldIcon,
   leaf: LeafIcon,
   clock: ClockIcon,
 };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/landing.tsx` around lines 30 - 38, FEATURE_ICONS has inconsistent entries: `drop` is a function wrapper while `shield`, `leaf`, and `clock` are direct component references. Make them consistent by using the same shape for every entry (either all functions of signature (p: { size?: number }) => React.ReactElement or all direct references). Update the mapping for FEATURE_ICONS so ShieldIcon, LeafIcon, and ClockIcon are wrapped like drop (e.g., shield: ({ size }) => <ShieldIcon size={size} />) or change drop to a direct reference DropIcon to match the others; ensure the mapping type (Record<FeatureIcon, (p: { size?: number }) => React.ReactElement>) still matches.
118-131: Using deprecated `document.execCommand` as a reasonable fallback.
`document.execCommand` is deprecated per MDN, but remains widely supported across all modern browsers (Chrome, Firefox, Safari, Edge as of 2026). While theoretically subject to removal, browsers maintain it for web-compatibility reasons. The current implementation properly handles it as a fallback only when `navigator.clipboard` is unavailable, preserving selection state appropriately. Adding a comment documenting this intentional compatibility layer would improve code clarity without requiring changes.
Verify each finding against the current code and only fix it if needed. In `@website/src/components/landing.tsx` around lines 118 - 131, The code uses document.execCommand("copy") as a deliberate fallback when navigator.clipboard is unavailable (inside the block checking taRef.current and setting selection), so add an inline comment above that fallback explaining that execCommand is deprecated but intentionally retained for broad browser compatibility; reference the taRef.current handling, preservation of selection via prevStart/prevEnd and the post-success call to setCopyTick to make clear this is a compatibility layer rather than a bug.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@website/src/app/globals.css`:
- Around line 2929-2931: Replace the deprecated CSS declaration "word-break:
break-word" with a modern overflow-wrap property: locate the rule containing
"word-break: break-word" (near the "white-space: pre-wrap;" declaration) and
remove that line, then add "overflow-wrap: break-word;" or "overflow-wrap:
anywhere;" depending on desired wrapping behavior so the selector uses
overflow-wrap instead of the deprecated word-break value.
In `@website/src/app/layout.tsx`:
- Around line 33-37: The code currently constructs metadata.metadataBase with
new URL(SITE_URL) which will throw at module init if NEXT_PUBLIC_SITE_URL is
malformed; update the initialization around the SITE_URL constant and the
metadata object so you attempt to construct a URL in a try/catch (or use
URL-safe parsing) and fall back to a safe default (e.g., a hardcoded origin)
when new URL(SITE_URL) fails, ensuring metadata.metadataBase receives a valid
URL object; reference the SITE_URL constant, the metadata export, and the
metadataBase assignment so you only change the parsing logic and fallback
behavior.
---
Nitpick comments:
In `@website/src/app/globals.css`:
- Around line 1118-1127: The CSS contains two consecutive rule blocks for the
same selector ".bi-tooltip code"; consolidate them into a single rule by merging
both blocks' declarations (white-space, font-family, background, border,
padding, color) into one ".bi-tooltip code" rule to remove duplication and keep
styles identical.
- Line 2664: Keyframe names are using camelCase (copiedPop, routeIn,
archStageIn, tccDrawIn); rename these `@keyframes` to kebab-case (copied-pop,
route-in, arch-stage-in, tcc-draw-in) and update every reference to them
(animation, animation-name, or shorthand uses) to the new kebab-case identifiers
so names stay consistent with the stylesheet convention; search for occurrences
of "copiedPop", "routeIn", "archStageIn", and "tccDrawIn" and replace both the
`@keyframes` declarations and all usages (e.g., in classes like .route-anim,
.arch-detail-anim or any animation: declarations).
In `@website/src/components/install.tsx`:
- Around line 68-116: The clipboard-copy logic is duplicated between
CopyableCommand and CurlInstall; extract a reusable hook (e.g.,
useCopyToClipboard) that exposes copy(text: string) and copied (boolean) and an
optional resetDelay, implement the timeout/cleanup and
navigator.clipboard.writeText inside that hook, place it in a hooks file (e.g.,
use-copy-to-clipboard), then refactor CopyableCommand and CurlInstall to import
and call useCopyToClipboard (remove local copyTick state and useEffect in both
components) so both components call copy(command) and read copied for the UI.
In `@website/src/components/landing.tsx`:
- Around line 30-38: FEATURE_ICONS has inconsistent entries: `drop` is a
function wrapper while `shield`, `leaf`, and `clock` are direct component
references. Make them consistent by using the same shape for every entry (either
all functions of signature (p: { size?: number }) => React.ReactElement or all
direct references). Update the mapping for FEATURE_ICONS so ShieldIcon,
LeafIcon, and ClockIcon are wrapped like drop (e.g., shield: ({ size }) =>
<ShieldIcon size={size} />) or change drop to a direct reference DropIcon to
match the others; ensure the mapping type (Record<FeatureIcon, (p: { size?:
number }) => React.ReactElement>) still matches.
- Around line 118-131: The code uses document.execCommand("copy") as a
deliberate fallback when navigator.clipboard is unavailable (inside the block
checking taRef.current and setting selection), so add an inline comment above
that fallback explaining that execCommand is deprecated but intentionally
retained for broad browser compatibility; reference the taRef.current handling,
preservation of selection via prevStart/prevEnd and the post-success call to
setCopyTick to make clear this is a compatibility layer rather than a bug.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e24b84d3-5b22-4f9d-b568-6fca1bdb2562
📒 Files selected for processing (8)
- website/src/__tests__/github-versions.test.ts
- website/src/app/globals.css
- website/src/app/layout.tsx
- website/src/app/opengraph-image.tsx
- website/src/app/sandbox/page.tsx
- website/src/components/install.tsx
- website/src/components/landing.tsx
- website/src/lib/github.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- website/src/app/sandbox/page.tsx
- website/src/lib/github.ts
- Promote the Quick install one-liner from the methods list into a prominent right-side card (`<aside class="install-quick">`) shown alongside the page header. The right half of the page was empty whitespace at desktop widths; the card now occupies it as a sticky element (top: 5rem) so the recommended install path stays in view while the user scrolls through the alternatives.
- Add a Windows PowerShell installer to the Quick install card: `irm https://gocciascript.dev/install.ps1 | iex` — the standard Windows equivalent of the macOS/Linux `curl … | sh` pattern. Both appear stacked in the card with `macOS / Linux` and `Windows · PowerShell` labels.
- Add a Windows package-manager method (`Windows · Scoop`) to the methods list below, between the Linux apt entry and pre-built binaries — gives Windows users a managed-package option that stays on the latest stable.
- New `.install-hero` CSS grid: `minmax(0, 1fr)` for the prose column, `clamp(320px, 36%, 440px)` for the Quick install card, with a 900px breakpoint that collapses to single-column for mobile/tablet (and disables the sticky card so it doesn't trap above the fold on narrow viewports).

Verification: lint + tsc + build clean, 60 tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- **Quick install card** gains a third row "Run without installing" with `npx gocciascript script.js` as the headline command. A small hint line below mentions the Bun and pnpm equivalents (`bunx gocciascript`, `pnpm dlx gocciascript`) so users on those toolchains see their exact incantation without leaving the card.
- **`npm · Bun · pnpm` method** (between Scoop and Pre-built binaries) shows the global-install commands for all three managers in one block, comment-separated. The npm package downloads the right native binary for the user's OS/arch on install — the same pattern as esbuild/swc/biome.
- **`Run without installing — npx · bunx · pnpm dlx` method** mirrors the Quick install card's third row but with all three managers' commands stacked, plus a description explaining when to use this path (one-off scripts, CI, no global install). The first invocation populates the manager's cache; subsequent invocations are essentially free.
- New `.install-quick-hint` style for the small descriptive line under Quick install rows; renders inline `code` segments parsed from backtick-delimited spans in the hint text.

Verification: lint + tsc + build clean, 60 tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
🧹 Nitpick comments (1)
website/src/components/install.tsx (1)
147-150: Expose copy success to assistive tech. On lines 147-150, the "copied" feedback is visual-only. Add a polite live status and/or dynamic aria-label so screen-reader users get confirmation too.
Suggested accessibility tweak
```diff
 <button
   type="button"
   className="install-block-copy"
   onClick={copy}
   title="Copy"
-  aria-label="Copy command"
+  aria-label={copied ? "Command copied" : "Copy command"}
 >
   {copied ? (
-    <span key={copyTick} className="copied-flash">
+    <span
+      key={copyTick}
+      className="copied-flash"
+      role="status"
+      aria-live="polite"
+    >
       copied
     </span>
   ) : (
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/install.tsx` around lines 147 - 150, The "copied" feedback span (the conditional that renders the span with key={copyTick} and className="copied-flash") is visual-only; modify that element so screen readers receive the update by adding an accessible live region (e.g., role="status" or aria-live="polite") and/or a dynamic aria-label that includes the copy confirmation (use copyTick or copied state to change the content). Ensure the span remains visually the same but has the live region attributes so assistive tech announces the "copied" message when the span is rendered.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@website/src/components/install.tsx`:
- Around line 213-220: The use of toLocaleDateString(undefined, ...) on
release.publishedAt causes locale/timezone-dependent output and hydration
mismatches; replace that expression with a deterministic format computed on the
server (or at render time) such as an ISO-based date string (e.g., YYYY-MM-DD)
or a server-side Intl.DateTimeFormat with a fixed locale/timezone, and render
that string (and include a time element with dateTime set to the original ISO
date if needed). Update the occurrence of new
Date(release.publishedAt).toLocaleDateString(...) in the Install component so it
uses the deterministic formatted value instead of toLocaleDateString to avoid
client/server differences.
---
Nitpick comments:
In `@website/src/components/install.tsx`:
- Around line 147-150: The "copied" feedback span (the conditional that renders
the span with key={copyTick} and className="copied-flash") is visual-only;
modify that element so screen readers receive the update by adding an accessible
live region (e.g., role="status" or aria-live="polite") and/or a dynamic
aria-label that includes the copy confirmation (use copyTick or copied state to
change the content). Ensure the span remains visually the same but has the live
region attributes so assistive tech announces the "copied" message when the span
is rendered.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4b9f1756-93e3-4a64-b512-0afcc29af7f0
📒 Files selected for processing (2)
- website/src/app/globals.css
- website/src/components/install.tsx
- **Quick install card**: drop the "Run without installing" row. The
card is back to two one-liners (curl-pipe-sh + irm-pipe-iex), keeping
it focused on the recommended platform installers. Removed the now-
unused `hint` field from the QUICK_INSTALLS shape and the
`.install-quick-hint` CSS.
- **Brand icons** in `components/icons.tsx`: `NpmIcon`, `BunIcon`,
`PnpmIcon` using the official simple-icons paths (CC0). All filled
with `currentColor` so the consuming `.pkg-tab` class can paint each
icon in its brand color and dim/gray the inactive tabs.
- **`<PackageManagerTabs>`**: tabbed picker for npm / Bun / pnpm in
the docs-site style (sli.dev / vite.dev). Each tab renders the
brand icon next to the manager name; the active tab gets the
brand color (`#cb3837` for npm, `#f7a8b8` for Bun, `#f9ad00` for
pnpm). Selection persists in `localStorage` keyed by `storageKey`
and is synchronized between tab groups on the same page via a
custom `pm-tabs-sync` event — flipping the picker on the install
card updates the run card to match.
- Two new sections sit between Scoop and Pre-built binaries in the
methods list:
- **JavaScript package managers** (`#npm`) — `npm install -g …`
/ `bun install -g …` / `pnpm add -g …`
- **Run without installing** (`#npx`) — `npx gocciascript …` /
`bunx gocciascript …` / `pnpm dlx gocciascript …`
Both use `<PackageManagerTabs storageKey="goccia.install.pm">` so
the user picks their PM once and it sticks.
Verification: lint + tsc + build clean, 60 tests pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Design Principles heading was wrapping because `.section-head` caps its content at `max-width: 60ch`, and at the clamped h2 font size (`clamp(2rem, 4vw, 3.25rem)` → up to 52px) the 38-char tagline renders wider than 60ch on desktop. New opt-in modifier `.anchor-heading.hd-no-wrap` lifts the column cap for the heading specifically and pins it to a single line; the description below still inherits the 60ch prose-flow constraint. A `max-width: 700px` media query falls back to normal wrapping on phones so the line doesn't overflow the viewport. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The install h1 was rendering "GocciaScript" as plain text — out of step with the rest of the site, where the wordmark is rendered as `Goccia<em>Script</em>` with the italic accent (nav, OG image, landing hero variants).

- `components/install.tsx` — the h1 now uses the same markup pattern (`Get Goccia<em>Script</em> on your machine.`).
- `app/globals.css` — added a `.section-head h1 em` / `h2 em` rule that applies italic + `var(--accent)` color, matching the nav `.brand em` treatment. Targeted at section-head specifically so emphasis in prose elsewhere (markdown body, etc.) keeps its default behaviour.

Verification: lint + tsc + build clean, 60 tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- **Unify Quick install into one tabbed block.** Replace the two
stacked rows (`macOS / Linux` + `Windows · PowerShell`) with a
single `<CommandTabs>` exposing macOS, Linux, and Windows as three
separate tabs. macOS and Linux share the curl one-liner but get
their own tabs so users see their platform reflected explicitly.
- **Kill the desktop horizontal scrollbar.** The 0.85rem mono font
on the standard `.install-block-pre` made the curl URL overflow
the narrow Quick install card (clamp 320–440px). Added an
install-quick-scoped override that tightens font to 0.72rem and
padding accordingly — keeps the entire one-liner on one visible
line at all desktop widths.
- **Reorder methods**: "Run without installing" (`#npx`) now sits
before "JavaScript package managers" (`#npm`) — the lower-friction,
zero-cost option leads.
- **Refactor**: extracted `<CommandTabs>` from the old
`<PackageManagerTabs>` so the same component renders both OS and
PM pickers. Brand colors moved from `.pkg-tab-{key}` class
selectors to `[data-key="..."]` attribute selectors so the
component doesn't have to thread per-key class names. Sync events
renamed `pm-tabs-sync` → `cmd-tabs-sync`. Two storage keys live in
parallel — `goccia.install.os` for the OS picker,
`goccia.install.pm` for the PM picker — so the two groups don't
cross-talk.
- The Quick install card's tab row spans the full card width now
with `flex: 1 1 0` per tab, so the three tabs distribute evenly
across the card edge-to-edge.
Verification: lint + tsc + build clean, 60 tests pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…hter, OS auto-detect
Layout — proper 50/50 hero, OS tabs as featured method:
- Drop the floating right-side `.install-quick` aside entirely. The
hero is now a true `1fr 1fr` split: heading + intro on the left,
pre-1.0 banner + version meta + GitHub action links on the right.
- Move the OS-tabbed installer (was in the Quick install card) into
the main `.install-methods` flow as the **featured first method**
(`/install#quick`). Soft accent backdrop + stronger border distinguish
it from the deeper-dive methods that follow.
- The horizontal scrollbar on the Quick install commands is gone
because the tabbed installer now occupies the full method-column
width (~1140px container - 1.5rem padding) instead of the previous
clamp(320px, 36%, 440px) right-card width — the curl URL fits with
room to spare at the standard 0.85rem font size.
OS auto-detect:
- New `detectOs(ua)` reads `navigator.userAgent` and picks
`macos | linux | windows`. Runs in a `useEffect` after hydration so
SSR output is stable (`macos` for everyone, then the client overrides).
- `<CommandTabs initialKey={...}>` now applies a per-render default
when no `localStorage` selection exists. Persisted user choice
always wins so manual overrides aren't clobbered.
OS brand icons:
- `AppleIcon`, `LinuxIcon`, `WindowsIcon` added to `components/icons.tsx`
(simple-icons paths for Apple + Linux/Tux; Windows-10/11 angled
4-pane logo). Active OS tab paints the icon in `var(--ink)` so it
reads as a quietly emphasized monochrome mark — the OS picker is
about platform, not brand color.
Shell syntax highlighting:
- New `highlightShell(src)` in `lib/highlight.ts` plus a
`<HighlightedShell>` component. Distinct from `highlightGeneric`
because the C-family `//` line-comment rule was eating URLs in
install commands (`https://gocciascript.dev/...` rendered as a
comment from the `//` onward — that's why install commands
"looked like JavaScript"). The shell tokenizer:
• `#`-comments only at line/whitespace boundaries — URLs with
`#` fragments aren't truncated.
• Strings (single/double/backtick) with backslash escapes.
• `$VAR` and `${VAR}` variable expansions.
• Long/short flags (`--foo`, `-fsSL`) at token boundaries.
• Bash control-flow keywords (`if`, `then`, `for`, …).
- Routed through both `markdown.tsx` (for `\`\`\`shell|sh|bash|zsh`
fences) and the install page's `CopyableCommand` when
`language === "shell"`.
Verification: lint + tsc + build clean, 60 tests pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
… hydration
Hydration error (1 + 2 in the user's list):
- `release.publishedAt` was rendered with `toLocaleDateString(undefined,
…)`, which picks up the SSR runtime's locale (en-US, often UTC) on
the server vs. the browser's locale on the client. The mismatched
text node tripped React 19's hydration check. Pinned the locale to
`"en-US"` and `timeZone: "UTC"` so SSR and the browser produce
identical output. Same fix removes the secondary "script tag while
rendering" warning since both originate from the same hydration
desync.
Banner spacing (3):
- Tightened `.install-hero-meta` gap (1rem → 0.5rem) and dropped the
legacy `.install-meta { margin: 0.5rem 0 0 }` and `.prestable-banner
{ margin: 0.5rem 0 2rem }` so the two cards sit close together
inside the hero's right column rather than floating apart.
Quick install reuse (4):
- Extracted `<QuickInstall>` into `components/quick-install.tsx`.
- Extracted shared `<CommandTabs>`, `OS_TABS`, `PM_TABS`, and
`detectOs` into `components/command-tabs.tsx` so both the
homepage and the install page import from one source.
- Replaced the homepage's inline `CurlInstall` with `<QuickInstall>`,
followed by a small meta row holding the latest-release pill and
the "Other install options →" link to `/install`.
OS-tabbed long install methods (5 + 7 + 8):
- Consolidated Homebrew / apt / Scoop into a single "Native package
manager" method with OS tabs.
- "Pre-built binaries" now OS-tabs and downloads the **full
toolchain** (`GocciaScriptLoader` + `GocciaTestRunner` +
`GocciaREPL`). macOS/Linux use a `for` loop and `chmod +x` →
`/usr/local/bin/`; Windows uses a PowerShell `foreach` over
`Invoke-WebRequest` writing to `$env:USERPROFILE\bin`.
- "Build from source" OS-tabs the final command (Unix
`./build/GocciaScriptLoader --help` vs. Windows
`.\\build\\GocciaScriptLoader.exe --help`); the build line itself
(`./build.pas loader testrunner repl`) is identical.
- All four OS-tabbed methods share the `goccia.install.os` storage
key so picking macOS in one updates the others — same as the PM
picker's `goccia.install.pm` key.
yarn + deno (6):
- Added `YarnIcon` (yarn ball-of-yarn) and `DenoIcon` (Deno dinosaur
silhouette) using simple-icons paths (CC0).
- Extended `PM_TABS` to npm / Bun / pnpm / Yarn / Deno.
- Run-without-installing: yarn → `yarn dlx gocciascript script.js`,
deno → `deno run -A npm:gocciascript script.js`.
- JS package managers: yarn → `yarn global add gocciascript`,
deno → `deno install -gA -n gocciascript npm:gocciascript`.
`$PATH` rendering + FreePascal link (9 + 10):
- Every `$PATH` mention is now `<code>$PATH</code>` (the install hero
intro already had this; "Build from source" now does too).
- "FreePascal compiler" in the Build-from-source description is now
a link to https://www.freepascal.org with `target="_blank" rel="noopener noreferrer"`.
Verification: lint + tsc + build clean, 60 tests pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Replace the en-US-pinned date with the user's actual locale, resolved
server-side from the `Accept-Language` request header. SSR and the
browser still render identical text (no hydration mismatch) because
the same string locale is passed down as a prop and used on both
sides — but a German visitor now sees "25. April 2026" and a French
visitor sees "25 avril 2026" instead of always seeing "April 25,
2026".
- New `lib/locale.ts` exports `parseAcceptLanguage(header)`. Splits
the header on `,`, parses each entry's `;q=` quality factor
(default 1.0), drops the `*` wildcard, sorts by descending q with
stable ordering, and validates each candidate via
`Intl.DateTimeFormat(...).resolvedOptions().locale` so we both
reject invalid tags AND return the canonical form (`en-us` →
`en-US`). Falls back to `DEFAULT_LOCALE = "en-US"` when nothing
parses.
- `app/install/page.tsx` reads `headers()` alongside the GitHub
fetch, parses the locale, passes it to `<Install locale={…}
release={…} />`. Calling `headers()` opts this route into
per-request rendering — the build output confirms `ƒ /install`.
The GitHub fetch keeps its `next: { revalidate: 1800 }` data-layer
cache so we don't hit the API per visitor.
- `components/install.tsx` accepts the `locale` prop and uses it in
`toLocaleDateString(locale, { ..., timeZone: "UTC" })`. Timezone
stays UTC so the calendar date is stable across visitors regardless
of their TZ.
11 new test cases in `__tests__/locale.test.ts` cover header parsing:
empty / undefined fallback, single-tag pass-through, case
canonicalization, q-factor sorting (incl. ties → header order),
real-world Chrome and multi-locale headers, invalid tags falling
through to the next candidate.
Verification: 71 tests pass, lint + tsc + build clean. /install
correctly transitions from static (○) to dynamic (ƒ) in the build
manifest.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
1. **Landing-page Latest version block** — extracted `<LatestVersion>`
into `components/latest-version.tsx` so the homepage hero and the
/install page share one renderer. The homepage previously showed
only a small `v0.6.1` pill; it now renders the full
"Latest version v0.6.1 released April 25, 2026" line, locale-aware
(page reads `Accept-Language` and threads the BCP-47 tag down).
`app/page.tsx` is now `ƒ` (per-request) like `/install`.
2. **"runtime built for code you didn't write yourself."** →
**"portable runtime under 5MB."** in the Design Principles section.
3. **Animated Result step** in the Compiler Pipeline:
- Console output renders character-by-character via a typewriter
effect (`useEffect` keyed on `[active, resultView]` ticks every
35 ms with a 220 ms initial delay).
- A solid accent caret pulses while characters are still streaming;
once the line completes, it switches to the standard blinking-
terminal cadence.
- JSON view stays fully rendered (partial JSON would just look
broken mid-type) but gets the existing `arch-detail-anim` slide-
in via a stable `key="json"`.
- New `.arch-caret` / `.arch-caret-blink` CSS with `@keyframes
arch-caret-pulse` (steady pulse during type) and `arch-caret-blink`
(50% steps for the post-type blink).
4. **Arch-aware prebuilt binaries** for macOS / Linux:
- New `detectArch(ua, os?)` in `command-tabs.tsx` reads `aarch64` /
`arm64` / `x86_64` / `wow64` markers from the UA string. macOS
defaults to `arm64` (Apple Silicon hides the arch in WebKit's UA,
so most-likely-default wins for new hardware). Falls back to
`x86_64` for everything else.
- `install.tsx` builds the prebuilt command at render time with
`unixPrebuiltBlock(os, arch, commented)` — the detected arch is
uncommented and labeled "auto-detected"; the alternate is fully
commented and labeled "uncomment to use instead". User flips by
toggling comments. Windows stays single-arch (only x86_64 is
shipped today).
- SSR renders `arm64` as the static default; the client useEffect
overrides if needed. Both archs always appear in the command, so
the SSR/client transition is just a comment-marker shift, not a
layout reflow.
Verification: 71 tests pass, lint + tsc + build clean. Both `/` and
`/install` now show as `ƒ` (dynamic per-request) in the build manifest.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Release artifact convention now matches what the build pipeline actually
publishes:
gocciascript-{version}-macos-{arm64|x64}.zip
gocciascript-{version}-linux-{arm64|x64}.tar.gz
gocciascript-{version}-windows-{x64|x86}.zip
with the toolchain laid out under `build/` inside each archive.
Install page:
- `ArchKey` widened to `"arm64" | "x64" | "x86"` to cover Windows
32-bit and the new x64 (was x86_64) suffix. `detectArch` updated
to map UA hints to those keys.
- Pre-built binaries section rebuilt on top of the archive flow:
download the archive, extract, `chmod +x` the three executables,
move them out of `build/` onto $PATH. macOS/Linux render both
arch blocks (auto-detected vs. commented-out alternate); Windows
renders both x64 and x86. The release tag from `release.tagName`
is threaded into the command, with `stripV()` removing the leading
`v` for the artifact filename.
`website/public/install.sh` (the curl-pipe-sh target):
- POSIX shell, detects OS via `uname -s` and arch via `uname -m`.
Resolves the latest tag from the GitHub releases API. Honors
`INSTALL_DIR`, `GOCCIA_VERSION`, `GOCCIA_REPO` env overrides.
Uses sudo only when the install dir isn't writable. Cleans up
the tmpdir on EXIT/INT/TERM.
`website/public/install.ps1` (the irm-pipe-iex target):
- PowerShell, detects 64-bit vs 32-bit Windows. Same env-var
overrides. Adds the install dir to the user's PATH if not present.
`packaging/homebrew/gocciascript.rb`:
- Standard formula with on_macos / on_linux blocks branching on
`Hardware::CPU.arm?`. Targets the archive URLs above. SHA256
placeholders flagged for the release pipeline.
`packaging/apt/`:
- `README.md` documents the repo layout (flat-layout, signed-by
keyring, dists/stable/main/binary-{amd64,arm64}/Packages.gz) and
the publishing flow.
- `gocciascript.list` — the source-list snippet users drop into
`/etc/apt/sources.list.d/`.
- `build-deb.sh` — packages the released linux archive into a
`gocciascript_<version>_<arch>.deb` ready for the pool.
Verification: 71 tests pass, lint + tsc + build clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Homebrew (`packaging/homebrew/README.md`):
- Confirm via the Homebrew docs that `brew tap <user>/<name>` strictly requires the destination repo to be named `homebrew-<name>` — so a one-step `brew install frostney/tap/gocciascript` cannot point at `frostney/GocciaScript` directly.
- Recommend the **mirror** pattern: source-of-truth lives at `packaging/homebrew/gocciascript.rb` here; release CI pushes the rendered formula into a tiny separate `frostney/homebrew-tap` on every tag push. Only one file to edit; the tap repo is auto-generated.
- Document the alternatives (two-arg tap, direct-URL install, homebrew-core submission) and why each is worse for now. Bun / Deno / esbuild references included as comparison points.
- Sketch the per-release sync: bump version, compute SHA-256 for each of the four archives, push to the tap repo, mirror-tag.
APT (`packaging/apt/README.md`):
- Restructure into "One-time setup" (GPG key + host) vs "Per-release publishing" (build .debs, regenerate index per arch, sign Release/InRelease, upload).
- Add the per-arch loop for `apt-ftparchive packages` / `gzip -kf Packages`, plus the explicit `apt-ftparchive release` + `gpg -abs Release.gpg` + `gpg --clearsign InRelease` flow modern apt expects.
- Note that steps 3–7 belong in a tag-triggered GitHub Actions workflow with `APT_SIGNING_KEY` as a secret, and the same job can mirror the Homebrew formula at the same time — single workflow, both publishing surfaces.
- Acknowledge the current gap: the host at `gocciascript.dev/repo` doesn't exist yet, so install instructions referenced from `/install` reach a 404 today. The docs describe the intended target.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
1. **Hide three install sections** behind one flag in `install.tsx`:
`SHOW_PACKAGE_MANAGER_SECTIONS = false` gates "Native package
manager", "Run without installing", and "JavaScript package
managers". Data + JSX are preserved unchanged — bringing them back
is a single boolean flip when the runtime is mature enough.
`SYSTEM_PM_COMMANDS` strips the brew/apt entries down to a "coming
soon" placeholder; the Scoop entry survives but is hidden until
the section reappears.
2. **`/install` ↔ `/installation` swap**:
- `app/install/page.tsx` → `app/installation/page.tsx`. The full
install page (Quick install featured method + Pre-built binaries
+ Build from source) now lives at `/installation`.
- New `app/install/route.ts` is a Node Route Handler that serves
`public/install.sh` as `text/plain`. Quick install one-liner
becomes `curl -fsSL https://gocciascript.dev/install | sh` —
no extension, easier to remember, and visiting the URL in a
browser just shows the script for auditing.
- `public/install.sh` stays in place, so `/install.sh` keeps
working as the legacy URL — backwards compat for anyone who
already pinned the old URL.
- Updated all internal links (`site-shell` nav, sitemap, hero CTA,
metadata canonicals) from `/install` → `/installation`.
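A Route Handler serving a shell script as `text/plain` is only a few lines. This is a hedged sketch of what an `app/install/route.ts` could look like, assuming the script lives at `public/install.sh`; the real handler may differ in caching headers and error handling.

```typescript
import { readFile } from "node:fs/promises";
import path from "node:path";

// text/plain makes `curl -fsSL .../install | sh` and in-browser auditing
// both work from the same extensionless URL.
function scriptResponse(body: string): Response {
  return new Response(body, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}

export async function GET(): Promise<Response> {
  const filePath = path.join(process.cwd(), "public", "install.sh");
  return scriptResponse(await readFile(filePath, "utf8"));
}
```

Keeping the body read inside `GET` (rather than at module load) means a missing file surfaces per-request instead of breaking the whole route module.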
3. **"View install script" link** next to the Quick install:
- Homepage hero install meta gets a `View install script` link
that opens `/install` in a new tab — users can audit the
curl-pipe-sh target before piping it.
- The featured "Quick install" method on `/installation` gets a
muted aux line: "View install script before piping it into your
shell." with the same target.
- New `.install-method p.install-method-aux` CSS rule (compound
selector, no `!important`) tightens spacing and dims the line
so it doesn't compete with the method's primary description.
Verification: lint + tsc + build clean. Build manifest shows
`ƒ /install` (Route Handler — script) and `ƒ /installation` (page)
side-by-side. `/install.sh` continues to work as a static file.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…tion
1. **Tool-call comparison — strip kB context and latency.** Cost
block reduced to call count + total token count
(`5 tool calls · 126 tok`). The kB-of-context and ms-of-latency
numbers were guesses; tokens are the metric that actually shows up
on the bill.
2. **"Show tool-call payload" switch.** Toggle above the comparison
that swaps each step's display from the bare command/code text
to the canonical OpenAI tool-call envelope:
{"name": "bash", "arguments": "{\"command\": \"ls /tmp/...\"}" }
That envelope is what the model actually emits and pays tokens
for. Per-step token counts (21/24/25/27/29 for the bash flow,
118 for the single goccia call) appear next to each step's
role label and were verified once via `gpt-tokenizer`'s
`cl100k_base` encoder (GPT-4 / GPT-4o); the totals are summed at
render time. The library itself is not a dependency — 53MB is
absurd to ship for static numbers — values are hardcoded with a
doc comment explaining how they were calculated.
3. **Sandbox preview → real `GocciaScriptLoader`.** Was a static
regex-based simulation; now POSTs to `/api/run` like the
playground. Globals from the JSON textarea become a `const`
prelude (`const users = [...]`) prepended to the user's code,
so the script sees the injected values without any API change.
Default script ends with an expression statement (the runner
doesn't accept top-level `return`); console output and the
structured value both flow back into the `host result` panel.
Errors from invalid globals JSON short-circuit before the
round-trip.
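The globals-to-prelude step can be sketched as a pure function. The `buildScript` name and the identifier guard here are illustrative, not the exact `sandbox.tsx` code; the guard just avoids emitting a syntactically broken `const` when a JSON key isn't a valid JS identifier.

```typescript
const IDENT = /^[A-Za-z_$][A-Za-z0-9_$]*$/;

// Each key in the sandbox's globals JSON becomes a `const` declaration
// prepended to the user's code, so the script sees the injected values
// without any API change.
function buildScript(globals: Record<string, unknown>, userCode: string): string {
  const prelude = Object.entries(globals)
    // Skip keys that aren't valid identifiers rather than emitting
    // broken declarations.
    .filter(([key]) => IDENT.test(key))
    .map(([key, value]) => `const ${key} = ${JSON.stringify(value)};`)
    .join("\n");
  return prelude ? `${prelude}\n${userCode}` : userCode;
}
```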
4. **Integration section hidden** behind `SHOW_INTEGRATION = false`
alongside the existing pattern in `/installation`. The Vercel-AI
snippet stays in source so unhiding it later is a single flag
flip — won't surface until the embedding API stabilises and
`@goccia/runtime` is published.
CSS: new `.tcc-toggle` row + `.tcc-step-tokens` per-step badge that
right-aligns inside `tcc-step-head`. `.tcc-step-head` gains
`flex-wrap: wrap` so the token badge can drop to its own line on
narrow viewports.
Verification: 71 tests pass, lint + tsc + build clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
1. **Line numbers across homepage code blocks.** New
`components/numbered-code.tsx` ships a generic gutter-prefixed
`<NumberedCode lineCount={…}>` shell. Applied to the Compiler
Pipeline source panel and the JSON-result panel. The hero
runnable card grows a `.hero-gutter` mirroring the playground's
pattern: a positioned gutter pre, with the highlighter overlay and
editable textarea both indented past it via a shared
`--hero-gutter-width` custom property so the line numbers stay
aligned no matter what the user types.
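The gutter alignment trick reduces to two small helpers. This is a sketch of the logic a `<NumberedCode>`-style shell needs, not the component itself; the helper names and the `ch`-based width are assumptions.

```typescript
// Render line numbers as one pre-formatted column, right-aligned to the
// widest number so single- and double-digit lines stay flush.
function gutterText(lineCount: number): string {
  const width = String(lineCount).length;
  return Array.from({ length: lineCount }, (_, i) =>
    String(i + 1).padStart(width, " "),
  ).join("\n");
}

// A shared width (e.g. fed into a --hero-gutter-width custom property)
// keeps the highlighter overlay and the editable textarea indented by
// exactly the same amount: one ch per digit plus padding.
function gutterWidthCh(lineCount: number): string {
  return `${String(lineCount).length + 1}ch`;
}
```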
2. **Sandbox host result trimmed to ~3 lines.** Default script swaps
the multi-line return-value pretty-print for three short
`console.log`s; the globals dataset shrinks to 3 users so the
visible output stays compact (`active count: …`, `names: …`,
`first id: …`). Fits comfortably without dominating the page.
3. **Shared console-output animation.** New
`components/animated-output.tsx` ships an `<AnimatedOutput>` that
reveals lines line-by-line via CSS `animation-delay` stagger plus
a blinking accent caret on the trailing line. Drop-in replacement
for the bespoke per-line rendering used in:
- the Compiler Pipeline result step (was a per-character
typewriter; now line-stagger so it stays consistent with
longer outputs);
- the playground output panel;
- the sandbox preview's host-result panel;
- the hero runnable card.
Each caller bumps a `runKey` per execution so the wrapper
remounts and the animation cleanly replays. New
`.anim-output-line` / `.anim-output-caret` CSS lives next to
`.pg-output` and reuses the existing `arch-caret-blink` keyframes.
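The stagger timing itself is simple to model. A sketch of the per-line plan an `<AnimatedOutput>`-style component would compute (the 80 ms step is an assumed value, not the component's actual constant):

```typescript
// Each output line gets an increasing animation-delay; only the final
// line carries the blinking caret.
function staggerPlan(lines: string[], stepMs = 80) {
  return lines.map((text, i) => ({
    text,
    delay: `${i * stepMs}ms`,
    caret: i === lines.length - 1,
  }));
}
```

Remounting the wrapper with a fresh `runKey` resets these delays, which is what makes the animation replay cleanly on each run.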
4. **Tool-call payload toggle now shows the full LLM request.** Was
the bare `{name, arguments}` envelope per step; now it's the full
request body the model sees on that turn — system prompt, user
task, tool definitions, plus all prior assistant tool calls + tool
results. New `buildLlmRequest(flowKey, index)` walks the flow's
step list and produces the OpenAI-shape `{model, messages, tools}`
object, which the toggle then renders as JSON.
Per-step token counts (and the per-flow total) come from
`LLM_CALL_TOKENS` — verified once with `gpt-tokenizer`'s
`cl100k_base` encoder via `scripts/compute-llm-call-tokens.mjs`
and pasted back as constants. New numbers:
bash: [119, 183, 336, 404, 474] → 1516 tok total
goccia: [170] → 170 tok total
The accumulating prior-context cost in the bash flow (each step
pays for the conversation history up to that point) shows up
naturally — by step 5 the model is paying ~4× the first-call
cost just to receive the same tool definition + system prompt
plus four prior tool calls and results. The `gpt-tokenizer`
library is intentionally not a dependency (~53 MB unpacked); the
helper script is callable on demand for re-derivation.
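The accumulation effect is easiest to see in a toy version of `buildLlmRequest`. Message shapes follow the OpenAI chat format; the step data and prompt strings here are invented, not the site's actual flow content.

```typescript
type Msg = { role: string; content?: string; tool_call_id?: string };

// The request for step `index` replays the system prompt, the user task,
// and every prior assistant tool call + tool result, which is why later
// steps cost more input tokens.
function buildLlmRequest(steps: { call: string; result: string }[], index: number) {
  const messages: Msg[] = [
    { role: "system", content: "You are a coding agent." },
    { role: "user", content: "Complete the task." },
  ];
  // Each prior turn adds one assistant tool call and one tool result.
  for (let i = 0; i < index; i++) {
    messages.push({ role: "assistant", content: steps[i].call });
    messages.push({ role: "tool", tool_call_id: `call_${i}`, content: steps[i].result });
  }
  return { model: "gpt-4o", messages, tools: [] as unknown[] };
}
```

By step 5 of a bash-style flow the message list holds four full call/result pairs on top of the fixed preamble, matching the roughly 4x first-call cost noted above.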
Verification: 71 tests pass, lint + tsc + build clean. /, /install,
/installation, /sandbox all render server-side per request as before.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
1. **Tool-call comparison: tokens in + tokens out.** Was a single
"input tokens" total per step; now each step (and each flow's
header) shows both sides of the bill: `↓ in` for the request
body the model sees and `↑ out` for the assistant message it
emits. The output side is tinted with the accent colour because
most APIs charge it at 2-3× the input rate.
Re-ran `scripts/compute-llm-call-tokens.mjs` (now reports both
sides) and pasted the new `LLM_CALL_TOKENS` shape:
bash: in=[119,183,336,404,474] out=[45,48,49,51,53]
goccia: in=[170] out=[142]
Totals: bash 1516+246=1762, goccia 170+142=312 → ~5.6× cost
ratio when both sides are counted, vs. the ~9× ratio when only
counting input tokens — more honest framing.
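The quoted totals can be checked with a few lines of arithmetic (the token counts are the hardcoded `LLM_CALL_TOKENS` values from above; the ratio is rounded for display):

```typescript
const bash = { in: [119, 183, 336, 404, 474], out: [45, 48, 49, 51, 53] };
const goccia = { in: [170], out: [142] };

const sum = (xs: number[]) => xs.reduce((a, b) => a + b, 0);
const bashTotal = sum(bash.in) + sum(bash.out);       // 1516 + 246 = 1762
const gocciaTotal = sum(goccia.in) + sum(goccia.out); // 170 + 142 = 312
const ratio = bashTotal / gocciaTotal;                // ≈ 5.6
```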
2. **Nightly install scripts.** Two new files:
website/public/install-nightly.sh
website/public/install-nightly.ps1
plus an `app/install-nightly/route.ts` Route Handler so the
bare `/install-nightly` URL returns the .sh content as
text/plain (mirrors the `/install` ↔ `/install.sh` pair we
already have for stable). Both scripts pin `TAG=nightly` and
pull `gocciascript-nightly-{os}-{arch}.{ext}` from the rolling
`nightly` GitHub release. Build manifest now lists
`ƒ /install-nightly` alongside `/install`.
Surfaced from the install page's Quick install method via a
one-line note: "Want unreleased fixes? Swap `/install` for
`/install-nightly` to track the rolling nightly channel."
3. **PostHog page-view analytics.** New `components/analytics.tsx`
exports an `<Analytics />` client component that initialises
`posthog-js` from `NEXT_PUBLIC_POSTHOG_KEY` (with optional
`NEXT_PUBLIC_POSTHOG_HOST`) on mount and captures `$pageview`
events on every `usePathname()` / `useSearchParams()` change.
No-ops cleanly if the env vars are unset — local dev and forks
without keys aren't affected. `autocapture: false` and
`person_profiles: "identified_only"` keep the default privacy
posture conservative. Wrapped in `<Suspense>` because the
component reads `useSearchParams`, which Next requires to live
inside a Suspense boundary.
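The testable core of that pageview plumbing is URL assembly. A sketch under the assumption that the component calls something like `posthog.capture("$pageview", { $current_url: ... })` on each `usePathname()`/`useSearchParams()` change; the helper name is invented.

```typescript
// Rebuild the full current URL from the pieces Next's navigation hooks
// expose, preserving the query string when one is present.
function pageviewUrl(origin: string, pathname: string, search: string): string {
  const qs = search ? `?${search}` : "";
  return `${origin}${pathname}${qs}`;
}
```

When `NEXT_PUBLIC_POSTHOG_KEY` is unset the whole effect bails before calling `posthog.init`, so local dev and forks never send anything.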
4. **Vercel Analytics + Speed Insights.** `@vercel/analytics` and
`@vercel/speed-insights` mounted as `<VercelAnalytics />` and
`<SpeedInsights />` at the bottom of the root layout. They
handle their own internal Suspense boundaries. Both are no-ops
off Vercel.
Verification: 71 tests pass, lint + tsc + build clean. Build
manifest shows `/install-nightly` as a per-request route.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
1. **Nightly install is now a build-time vendor fetch.** The
user-facing `/install-nightly` route + `install-nightly.{sh,ps1}`
scripts are gone. In their place: `scripts/fetch-nightly-binary.mjs`
downloads the rolling-nightly archive matching the build host's
OS / arch and drops `GocciaScriptLoader` / `GocciaTestRunner` /
`GocciaREPL` into `website/vendor/`. Wired into `prebuild` (NOT
`predev`) so `next build` populates the vendor dir but `next dev`
leaves it alone — local dev keeps using `../build/<bin>` from
`./build.pas loader`.
Honoured envs: `GOCCIA_REPO`, `GOCCIA_NIGHTLY_TAG`,
`SKIP_VENDOR_FETCH=1` (offline-friendly bypass). Fetch failure
doesn't fail the build when a previously-vendored binary already
exists at `vendor/GocciaScriptLoader`.
Reordered candidate paths in `app/api/run/route.ts` so the
locally built `../build/GocciaScriptLoader` wins over the vendored
one when both exist — avoids stale-vendor-shadows-fresh-local-work
surprises during dev. Added `/vendor/` to `.gitignore`. The
existing `next.config.ts` `outputFileTracingIncludes` rule
already pulls `./vendor/**` into the `/api/run` deploy bundle, so
Vercel ships with the binary baked in.
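The host-detection step reduces to mapping Node's `process.platform`/`process.arch` onto the nightly artifact convention (`gocciascript-nightly-{os}-{arch}.{ext}`) described earlier. A sketch; the error text and function name are illustrative, not the script's exact code.

```typescript
// Translate a Node build-host platform/arch pair into the nightly
// release asset name to download.
function nightlyAsset(platform: string, arch: string): string {
  const a = arch === "arm64" ? "arm64" : "x64";
  switch (platform) {
    case "darwin":
      return `gocciascript-nightly-macos-${a}.zip`;
    case "linux":
      return `gocciascript-nightly-linux-${a}.tar.gz`;
    case "win32":
      // Only x64/x86 Windows archives are published today.
      return `gocciascript-nightly-windows-${arch === "ia32" ? "x86" : "x64"}.zip`;
    default:
      throw new Error(`unsupported build host: ${platform}/${arch}`);
  }
}
```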
2. **Observability consolidated under PostHog.** Dropped
`@vercel/analytics` and `@vercel/speed-insights` — PostHog's
single client now covers all three responsibilities:
- Page views — already on `usePathname()` / `useSearchParams()`.
- **Web Vitals** (replaces Speed Insights) — enabled via
`capture_performance: { web_vitals: true }`. PostHog reports
LCP / FCP / CLS / INP / TTFB as `$web_vitals` events surfaced
in the Web Analytics dashboard.
- **Exception tracking** (no separate Sentry needed) — enabled
via `capture_exceptions: true`. PostHog hooks
`window.onerror` and `unhandledrejection`; uncaught errors
show up in PostHog Error Tracking automatically.
Privacy posture stays conservative: `autocapture: false`,
`person_profiles: "identified_only"`, and the whole client
no-ops cleanly when `NEXT_PUBLIC_POSTHOG_KEY` is unset (local
dev / forks).
Layout slimmed back down — only `<Analytics />` (PostHog)
in `<Suspense>`; the two `@vercel/*` components removed.
Verification: 71 tests pass, lint + tsc + build clean (with
`SKIP_VENDOR_FETCH=1` since `nightly` archives aren't published yet).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- Wrap POST in a top-level try/catch that funnels unhandled errors to `captureServerException` and returns a structured 500 transport error instead of leaking stack traces to the client.
- Add `lib/posthog-server.ts`: memoised `posthog-node` client with `flushAt: 1` so events ship before the serverless function exits, plus `captureServerException` / `captureServerEvent` helpers shaped for PostHog Error Tracking (`$exception_list` + `$exception_*`).
- Emit operational events on every early-error path (`api_run_rate_limited`, `api_run_body_too_large`, `api_run_invalid_json`, `api_run_code_too_large`, `api_run_aborted`, `api_run_spawn_failed`) plus a single `api_run_completed` per finished run with timing, exit code, truncation, and runner-emitted error class — no source code or result values, to avoid PII in telemetry.
- Reinstate `posthog-js` and `posthog-node` in `package.json` / `bun.lock` (they were dropped in an earlier package shuffle).
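The exception-capture shape can be sketched independently of the PostHog client. This mock passes the capture function in as a parameter so the event shape is testable; the `$exception_list` property names follow the description above, while the `distinctId` and `mechanism` details are assumptions about what a `captureServerException` helper would send.

```typescript
type CaptureFn = (e: {
  distinctId: string;
  event: string;
  properties: Record<string, unknown>;
}) => void;

// Shape an arbitrary thrown value into the $exception event PostHog
// Error Tracking ingests, plus any extra non-PII context (route, timing).
function captureServerException(
  capture: CaptureFn,
  err: unknown,
  extra: Record<string, unknown> = {},
) {
  const e = err instanceof Error ? err : new Error(String(err));
  capture({
    distinctId: "server",
    event: "$exception",
    properties: {
      $exception_list: [{ type: e.name, value: e.message, mechanism: { handled: true } }],
      ...extra,
    },
  });
}
```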
Actionable comments posted: 11
🧹 Nitpick comments (5)
website/public/install.sh (1)
42-44: Consider adding `--fail-with-body` for better error handling. The JSON parsing via `grep | sed` is a pragmatic choice for avoiding a `jq` dependency. However, if the GitHub API returns an error response (rate limit, etc.), the current approach silently fails. Adding `--fail-with-body` (if the curl version supports it) or explicitly checking `$?` after curl would help surface errors. This is optional given the script already has a fallback check on line 44.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/public/install.sh` around lines 42 - 44: update the TAG retrieval around the curl call that sets TAG (the command assigning TAG using curl | grep | sed) to surface HTTP/API errors: either append --fail-with-body to the curl invocation (if available) or capture curl's exit status and stderr/output and log/err when curl fails before parsing; ensure the script returns a non-zero exit or emits a clear error message when curl fails so the subsequent [ -n "$TAG" ] || err "could not resolve latest release" has full context.
website/scripts/compute-llm-call-tokens.mjs (1)
24-86: Verify tool definitions stay in sync with sandbox.tsx. The tool definitions and steps here must mirror `TOOL_CALL_FLOWS` in `sandbox.tsx`. The context snippets confirm they currently match, but consider adding a comment referencing the source file and line ranges to help future maintainers keep them synchronized.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/scripts/compute-llm-call-tokens.mjs` around lines 24 - 86: the BASH_TOOL, RUN_CODE_TOOL, and BASH_STEPS definitions must remain identical to TOOL_CALL_FLOWS in sandbox.tsx; update this file so those symbols explicitly mirror the source by adding a clear comment above their definitions referencing sandbox.tsx and the relevant TOOL_CALL_FLOWS block (include file name and approximate line range or function name) and note that any changes to TOOL_CALL_FLOWS must be duplicated here; ensure BASH_STEPS entries match the sequence and exact strings used in TOOL_CALL_FLOWS and RUN_CODE_TOOL/BASH_TOOL parameter shapes match the function definitions in sandbox.tsx.
website/src/lib/highlight.ts (1)
75-81: Minor: Number literal regex allows underscore repetition. The regex `/[0-9._n]/` allows patterns like `1__2` or `1.2.3`, which aren't valid numeric literals. This is acceptable for a syntax highlighter (it's forgiving), but worth noting for future reference.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/lib/highlight.ts` around lines 75 - 81: the number-scanning loop in highlight.ts (the while that checks /[0-9._n]/ and then push("n", src.slice(i, j))) is too permissive (allows repeated underscores and multiple dots); update the loop to enforce numeric-literal rules by either (A) replacing the regex with a more precise test like /[0-9n]|[._](?![._])/ and adding a seenDot flag to prevent more than one '.' or (B) implementing explicit state tracking: let seenDot = false; let prevUnderscore = false; advance j while characters are digits or 'n', allow '.' only if !seenDot (then set seenDot = true), allow '_' only if !prevUnderscore (then set prevUnderscore = true), reset prevUnderscore on digits; keep the existing push("n", src.slice(i, j)) and i = j logic intact.
website/src/app/install/route.ts (1)
21-32: Consider adding error handling for missing file. If `public/install.sh` is missing (e.g., during local development without the file), `readFile` throws an unhandled `ENOENT` error, resulting in an opaque 500 response. A try-catch with a descriptive error would improve debuggability.
🛡️ Optional: Add error handling

  export async function GET() {
    const filePath = path.join(process.cwd(), "public", "install.sh");
- const body = await readFile(filePath, "utf8");
+ let body: string;
+ try {
+   body = await readFile(filePath, "utf8");
+ } catch (err) {
+   return new Response("install.sh not found", { status: 404 });
+ }
    return new Response(body, {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/app/install/route.ts` around lines 21 - 32: the GET route reads public/install.sh via readFile into body but lacks error handling; wrap the readFile call (and filePath resolution) in a try-catch inside the exported GET function, catch ENOENT and return a clear Response (e.g., 404 with a descriptive message) and for other errors return a 500 with the error message; ensure you still return the existing Response(body, { headers }) on success and reference the same filePath, readFile, and Response symbols so the behavior is unchanged except for graceful error responses.
website/scripts/fetch-nightly-binary.mjs (1)
49-52: Windows ARM64 maps to `x64` — acceptable fallback. On ARM64 Windows, `process.arch` is `arm64`, but this falls through to `x64`. Since x64 binaries run via emulation on ARM64 Windows, this works but may be slower. Consider adding explicit ARM64 support if/when ARM64 Windows builds are published.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/scripts/fetch-nightly-binary.mjs` around lines 49 - 52, The Windows branch currently maps arch via p and a and falls back to x64 for unknown arches; add explicit ARM64 handling so when p === "win32" and a === "arm64" you return arch "arm64" (instead of implicitly mapping to "x64"), keeping the existing ia32 -> "x86" case and a final fallback to "x64"; update the return in that win32 branch (referencing the variables p and a and the existing error message `unsupported build host: ${p}/${a}`) to handle arm64 explicitly so future ARM64 Windows builds are supported without relying on emulation fallback.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packaging/apt/build-deb.sh`:
- Around line 35-43: The cp commands assume binaries are in "${WORK}/build" but
the tarball expands into a top-level directory (e.g.
"${WORK}/gocciascript-.../build"), so update the script around the PKG/WORK
handling to detect the extracted top-level folder (e.g. determine EXTRACT_DIR by
listing "${WORK}" after extraction or using tar --strip-components) and then
copy binaries from "${EXTRACT_DIR}/build/GocciaScriptLoader",
"${EXTRACT_DIR}/build/GocciaTestRunner", and "${EXTRACT_DIR}/build/GocciaREPL"
into "${PKG}/usr/bin/"; ensure any temporary variables (EXTRACT_DIR) are used in
place of the hardcoded "${WORK}/build" in the cp commands so copying succeeds
regardless of the tarball's top-level folder name.
In `@packaging/apt/README.md`:
- Around line 74-77: The loop currently runs apt-ftparchive packages pool/main
for both architectures which produces identical, unfiltered indexes; update the
loop to ensure the target directory exists (mkdir -p
dists/stable/main/binary-$arch) and invoke apt-ftparchive with an architecture
filter so each Packages file contains only $arch-specific entries (e.g. use
apt-ftparchive packages --architecture $arch pool/main or the equivalent
apt-ftparchive option) before gzipping with gzip -kf
dists/stable/main/binary-$arch/Packages.
In `@packaging/homebrew/gocciascript.rb`:
- Around line 27-40: The formula currently contains placeholder sha256 strings
("REPLACE_WITH_MACOS_ARM64_SHA256", "REPLACE_WITH_MACOS_X64_SHA256",
"REPLACE_WITH_LINUX_ARM64_SHA256", "REPLACE_WITH_LINUX_X64_SHA256"); replace
each placeholder with the actual SHA-256 checksum of the corresponding release
artifact (macos arm64, macos x64, linux arm64, linux x64) by downloading the
matching archives from the GitHub release (or obtaining the checksums published
with the release) and computing/confirming sha256 (e.g., shasum -a 256), then
update the four sha256 string values in the formula so they exactly match the
computed checksums.
In `@packaging/homebrew/README.md`:
- Around line 22-46: The README's "release workflow" steps (bumping version in
gocciascript.rb, computing SHA-256 for four archives, committing/tagging the
tap) are out of sync with the actual release job in .github/workflows/ci.yml
which only uploads release artifacts and changelog updates; either implement the
tap-sync steps in CI (add a release workflow step that updates
packaging/homebrew/gocciascript.rb, computes shasums for the four
{macos,linux}×{arm64,x64} artifacts, commits and tags the tap repo using a
deploy key or fine‑grained PAT) or update the README section to mark these steps
as manual/planned and describe the exact current CI behavior (reference the
"release" job in the workflow and the formula file gocciascript.rb) so operators
are not misled.
In `@website/scripts/fetch-nightly-binary.mjs`:
- Around line 103-106: The move using fs.rename(src, dest) can fail with EXDEV
across filesystems (e.g., /tmp on different mount); update the logic around
rename in the block that computes dest and increments copied to instead perform
a reliable cross-device move: use fs.promises.copyFile(src, dest) (or
fs.copyFile) to copy the file, then remove the original with rm(src, { force:
true }) (or fs.promises.unlink) and only then increment copied; ensure required
fs promise API is imported/available where rename was used and remove the direct
rename call.
In `@website/src/components/analytics.tsx`:
- Around line 33-54: In the useEffect block inside the analytics component,
replace the internal check posthog.__loaded with the public property
posthog.loaded to guard against re-initialization; update the conditional that
currently reads `if (posthog.__loaded) return;` to use `posthog.loaded` so the
component uses the documented PostHog API before calling
posthog.init(POSTHOG_KEY, {...}).
In `@website/src/components/landing.tsx`:
- Around line 86-95: The run() function can be re-entered via keyboard shortcuts
causing overlapping requests; add a re-entry guard at the top of run that
immediately returns if an "in-progress" flag is set (don't rely solely on the
button's disabled prop). Implement this by introducing a ref (e.g., runningRef)
that mirrors setRunning: set runningRef.current = true before starting the fetch
and set it false in finally; at the start of run() do if (runningRef.current)
return; and update the same guard in the other place that calls run (the
textarea shortcut/keyboard handler) so both paths check runningRef before
invoking run().
In `@website/src/components/sandbox.tsx`:
- Around line 479-483: The request body uses an unsupported mode value
"tree-walk" when calling fetch("/api/run"); replace that literal with the
route's declared mode name (use "interpreted" or "bytecode" as expected by the
handler) so the body becomes JSON.stringify({ code: fullCode, mode:
"<interpreted|bytecode>", asi: true }). If the UI exposes a mode selector, map
any UI label "tree-walk" to "interpreted" before sending; update the code path
around fullCode and the fetch call to send the validated mode string the API
expects.
- Around line 424-430: The buildScript function injects parsedGlobals by
interpolating keys directly into const declarations (see buildScript,
parsedGlobals, prelude, userCode), which breaks when keys are not valid JS
identifiers; fix it by validating/sanitizing each key before emitting a const
(allow only /^[A-Za-z_$][A-Za-z0-9_$]*$/) and skip or remap invalid keys, or
instead inject globals safely without identifier interpolation (for example
assign to globalThis or a single injected object using only bracketed/property
access with JSON.stringify values); ensure the remediation preserves value
serialization (use the existing JSON.stringify for values) and update prelude
generation to only include safe declarations or safe assignments for each
parsedGlobals entry.
In `@website/src/components/site-shell.tsx`:
- Around line 196-204: The nav links are only exposing the active state via
data-active, so update the Link rendering in the TABS.map loop (the element
created in site-shell.tsx using Link and the active variable) to also set
aria-current="page" when active === t.id (otherwise omit the attribute or set to
undefined) so assistive tech can announce the current page.
- Around line 190-193: The data-ready attribute never updates because
indicatorReadyRef.current is mutated without re-rendering; change to React state
(e.g. useState hook: indicatorReady / setIndicatorReady) and in the
onTransitionEnd handler call setIndicatorReady(true), then render the span with
data-ready={indicatorReady} (references: indicatorReadyRef, onTransitionEnd,
data-ready) so the DOM attribute updates and CSS can observe the change.
---
Nitpick comments:
In `@website/public/install.sh`:
- Around line 42-44: Update the TAG retrieval around the curl call that sets TAG
(the command assigning TAG using curl | grep | sed) to surface HTTP/API errors:
either append --fail-with-body to the curl invocation (if available) or capture
curl's exit status and stderr/output and log/err when curl fails before parsing;
ensure the script returns a non-zero exit or emits a clear error message when
curl fails so the subsequent [ -n "$TAG" ] || err "could not resolve latest
release" has full context.
In `@website/scripts/compute-llm-call-tokens.mjs`:
- Around line 24-86: The BASH_TOOL, RUN_CODE_TOOL, and BASH_STEPS definitions
must remain identical to TOOL_CALL_FLOWS in sandbox.tsx; update this file so
those symbols explicitly mirror the source by adding a clear comment above their
definitions referencing sandbox.tsx and the relevant TOOL_CALL_FLOWS block
(include file name and approximate line range or function name) and note that
any changes to TOOL_CALL_FLOWS must be duplicated here; ensure BASH_STEPS
entries match the sequence and exact strings used in TOOL_CALL_FLOWS and
RUN_CODE_TOOL/BASH_TOOL parameter shapes match the function definitions in
sandbox.tsx.
In `@website/scripts/fetch-nightly-binary.mjs`:
- Around line 49-52: The Windows branch currently maps arch via p and a and
falls back to x64 for unknown arches; add explicit ARM64 handling so when p ===
"win32" and a === "arm64" you return arch "arm64" (instead of implicitly mapping
to "x64"), keeping the existing ia32 -> "x86" case and a final fallback to
"x64"; update the return in that win32 branch (referencing the variables p and a
and the existing error message `unsupported build host: ${p}/${a}`) to handle
arm64 explicitly so future ARM64 Windows builds are supported without relying on
emulation fallback.
In `@website/src/app/install/route.ts`:
- Around line 21-32: The GET route reads public/install.sh via readFile into
body but lacks error handling; wrap the readFile call (and filePath resolution)
in a try-catch inside the exported GET function, catch ENOENT and return a clear
Response (e.g., 404 with a descriptive message) and for other errors return a
500 with the error message; ensure you still return the existing Response(body,
{ headers }) on success and reference the same filePath, readFile, and Response
symbols so the behavior is unchanged except for graceful error responses.
In `@website/src/lib/highlight.ts`:
- Around line 75-81: The number-scanning loop in highlight.ts (the while that
checks /[0-9._n]/ and then push("n", src.slice(i, j))) is too permissive (allows
repeated underscores and multiple dots); update the loop to enforce
numeric-literal rules by either (A) replacing the regex with a more precise test
like /[0-9n]|[._](?![._])/ and adding a seenDot flag to prevent more than one
'.' or (B) implement explicit state tracking: let seenDot = false; let
prevUnderscore = false; advance j while characters are digits or 'n', allow '.'
only if !seenDot (then set seenDot = true), allow '_' only if !prevUnderscore
(then set prevUnderscore = true), reset prevUnderscore on digits; keep the
existing push("n", src.slice(i, j)) and i = j logic intact.
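Option (B) above can be sketched as follows. `scanNumber` is an illustrative name, and the rules follow the comment's spec (at most one dot, no doubled underscores, underscore state reset on digits) rather than highlight.ts's actual implementation.

```typescript
// Hypothetical sketch: return the index just past the numeric literal
// starting at i, enforcing single-dot and no-repeated-underscore rules.
function scanNumber(src: string, i: number): number {
  let j = i;
  let seenDot = false;
  let prevUnderscore = false;
  while (j < src.length) {
    const ch = src[j];
    if (/[0-9n]/.test(ch)) {
      prevUnderscore = false; // digits reset the underscore state
      j++;
    } else if (ch === "." && !seenDot) {
      seenDot = true; // allow at most one dot
      prevUnderscore = false;
      j++;
    } else if (ch === "_" && !prevUnderscore) {
      prevUnderscore = true; // reject a second consecutive underscore
      j++;
    } else {
      break;
    }
  }
  return j; // caller keeps push("n", src.slice(i, j)) and i = j intact
}
```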
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 8584f516-299b-4355-b2b9-09134040c34e
⛔ Files ignored due to path filters (1)
`website/bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (36)
- packaging/apt/README.md
- packaging/apt/build-deb.sh
- packaging/apt/gocciascript.list
- packaging/homebrew/README.md
- packaging/homebrew/gocciascript.rb
- website/.gitignore
- website/package.json
- website/public/install.ps1
- website/public/install.sh
- website/scripts/compute-llm-call-tokens.mjs
- website/scripts/fetch-nightly-binary.mjs
- website/src/__tests__/locale.test.ts
- website/src/app/api/run/route.ts
- website/src/app/globals.css
- website/src/app/install/route.ts
- website/src/app/installation/page.tsx
- website/src/app/layout.tsx
- website/src/app/page.tsx
- website/src/app/sitemap.ts
- website/src/components/analytics.tsx
- website/src/components/animated-output.tsx
- website/src/components/command-tabs.tsx
- website/src/components/highlighted-code.tsx
- website/src/components/icons.tsx
- website/src/components/install.tsx
- website/src/components/landing.tsx
- website/src/components/latest-version.tsx
- website/src/components/markdown.tsx
- website/src/components/numbered-code.tsx
- website/src/components/playground.tsx
- website/src/components/quick-install.tsx
- website/src/components/sandbox.tsx
- website/src/components/site-shell.tsx
- website/src/lib/highlight.ts
- website/src/lib/locale.ts
- website/src/lib/posthog-server.ts
✅ Files skipped from review due to trivial changes (4)
- packaging/apt/gocciascript.list
- website/package.json
- website/src/components/markdown.tsx
- website/.gitignore
Archive-path family of bugs (release tarballs ship binaries directly under
`gocciascript-<ver>-<os>-<arch>/`, not under a `build/` subdir — caught
by CodeRabbit on `build-deb.sh`, then fixed everywhere with the same
underlying mistake):
- packaging/apt/build-deb.sh: copy from extracted root, fail loudly with
a directory listing if the expected path is missing.
- packaging/homebrew/gocciascript.rb: drop the bogus `build/` prefix from
`bin.install` and document the actual archive layout.
- website/scripts/fetch-nightly-binary.mjs: same path fix; also switch
`rename` -> `copyFile` so cross-filesystem moves on CI workers don't
fail with EXDEV.
Sandbox + landing:
- sandbox.tsx: validate globals keys against an identifier regex before
emitting `const ${k} = ...;` -- keys like "user-id" or "123" would
otherwise hard-fail the runner's parser with a SyntaxError surfaced
far from the actual cause. Also send `mode: "interpreted"` instead of
the unsupported `"tree-walk"` literal.
- landing.tsx: guard `run()` against re-entry -- the Cmd/Ctrl+Enter
shortcut bypasses the disabled button, and rapid double-presses
could race two `/api/run` requests against the output panel.
Accessibility + correctness on site-shell:
- Convert `indicatorReadyRef` to state -- mutating a ref doesn't
re-render, so `data-ready` never reached the DOM.
- Add `aria-current="page"` on the active nav link so assistive tech
announces the current page (`data-active` is style-only).
Boot-time hardening:
- layout.tsx: parse `NEXT_PUBLIC_SITE_URL` defensively. A typo'd env
var would otherwise throw at module init and crash the entire app.
Stylesheet:
- globals.css: replace deprecated `word-break: break-word` with
`overflow-wrap: break-word`.
Documentation honesty:
- packaging/homebrew/README.md: mark the per-release sync flow as a
target/runbook rather than implemented CI; flag the formula's
placeholder sha256s explicitly so nobody assumes the tap is shippable.
- packaging/homebrew/gocciascript.rb: WIP banner in the header
acknowledging the same.
- packaging/apt/README.md: filter `apt-ftparchive packages` by `--arch`
so amd64 and arm64 indexes aren't identical mixed sets.
Vercel deploy:
- New `vercel.json` pinning build/install commands and bumping the
`/api/run` Function's `maxDuration` and `memory` to fit the spawned
loader's watchdog window plus output buffering.
Pushback (no change):
- analytics.tsx still uses `posthog.__loaded` -- verified against the
posthog-js d.ts (only `__loaded` is on the public class type) and
the bundled runtime (`this.__loaded=!...`). The plain `loaded`
property CodeRabbit cited doesn't exist on the current SDK.
Actionable comments posted: 8
♻️ Duplicate comments (1)
website/src/components/landing.tsx (1)
86-92: ⚠️ Potential issue | 🟠 Major

Use a synchronous in-flight guard, not `running` state alone.

Line 91 still relies on React state becoming visible after the next render. Two fast invocations can still pass this check and launch overlapping `/api/run` requests. Keep the guard in a ref and gate the textarea shortcut on that same ref.

Suggested fix

```diff
 function HeroRunnableCard({ code }: { code: string }) {
   const [copyTick, setCopyTick] = useState(0);
   const [running, setRunning] = useState(false);
   const [output, setOutput] = useState<OutputLine[] | null>(null);
   const [src, setSrc] = useState(code);
+  const runningRef = useRef(false);
   const taRef = useRef<HTMLTextAreaElement>(null);
   const hlRef = useRef<HTMLPreElement>(null);

   const run = async () => {
-    if (running) return;
+    if (runningRef.current) return;
+    runningRef.current = true;
     setRunning(true);
     const banner = "› GocciaScriptLoader coffee-shop.js";
     setOutput([{ kind: "meta", text: banner }]);
     try {
       const res = await fetch("/api/run", {
@@
     } catch (err) {
       setOutput([
         { kind: "meta", text: banner },
         { kind: "err", text: `network error: ${(err as Error).message}` },
       ]);
     } finally {
+      runningRef.current = false;
       setRunning(false);
     }
   };
@@
   onKeyDown={(e) => {
-    if (e.key === "Enter" && (e.metaKey || e.ctrlKey)) {
+    if (
+      !runningRef.current &&
+      e.key === "Enter" &&
+      (e.metaKey || e.ctrlKey)
+    ) {
       e.preventDefault();
       run();
     }
   }}
```

In React 19, are `useState` updates guaranteed to be visible to a second event-handler invocation before the next render, or is a `useRef` flag the recommended way to implement a synchronous re-entry guard?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/landing.tsx` around lines 86 - 92, The run function's re-entry guard uses React state (running) which is async — create a synchronous ref (e.g., inFlightRef = useRef(false)) and use inFlightRef.current as the immediate guard inside run (check it at function start, set it true before starting the async request and reset it to false in a finally block), keep setRunning(running) for UI only, and also gate the textarea shortcut handler on the same inFlightRef.current so both the button and the ⌘/Ctrl+Enter shortcut use the identical synchronous in-flight guard; update references to running/setRunning only for UI state and ensure inFlightRef is reset even on errors.
🧹 Nitpick comments (1)
website/src/components/site-shell.tsx (1)
69-105: Extract indicator measurement into a shared helper.

The same geometry block appears in both effects; consolidating it reduces drift risk.

Refactor sketch

```diff
-import { useEffect, useLayoutEffect, useRef, useState } from "react";
+import { useCallback, useEffect, useLayoutEffect, useRef, useState } from "react";
@@
+  const updateIndicator = useCallback(() => {
+    const bar = tabBarRef.current;
+    if (!bar) return;
+    const activeEl = bar.querySelector<HTMLElement>('.nav-link[data-active="true"]');
+    if (!activeEl) return;
+    const barRect = bar.getBoundingClientRect();
+    const r = activeEl.getBoundingClientRect();
+    setIndicator({
+      left: r.left - barRect.left,
+      top: r.top - barRect.top,
+      width: r.width,
+      height: r.height,
+    });
+  }, []);
+
   useIsoLayoutEffect(() => {
-    const bar = tabBarRef.current;
-    if (!bar) return;
-    const activeEl = bar.querySelector<HTMLElement>(
-      '.nav-link[data-active="true"]',
-    );
-    if (!activeEl) return;
-    const barRect = bar.getBoundingClientRect();
-    const r = activeEl.getBoundingClientRect();
-    setIndicator({
-      left: r.left - barRect.left,
-      top: r.top - barRect.top,
-      width: r.width,
-      height: r.height,
-    });
-  }, [active]);
+    updateIndicator();
+  }, [active, updateIndicator]);
@@
   useEffect(() => {
-    const onResize = () => {
-      const bar = tabBarRef.current;
-      if (!bar) return;
-      const activeEl = bar.querySelector<HTMLElement>(
-        '.nav-link[data-active="true"]',
-      );
-      if (!activeEl) return;
-      const barRect = bar.getBoundingClientRect();
-      const r = activeEl.getBoundingClientRect();
-      setIndicator({
-        left: r.left - barRect.left,
-        top: r.top - barRect.top,
-        width: r.width,
-        height: r.height,
-      });
-    };
-    window.addEventListener("resize", onResize);
-    return () => window.removeEventListener("resize", onResize);
-  }, []);
+    window.addEventListener("resize", updateIndicator);
+    return () => window.removeEventListener("resize", updateIndicator);
+  }, [updateIndicator]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/site-shell.tsx` around lines 69 - 105, Extract the duplicate geometry calculation into a reusable helper (e.g., measureActiveIndicator) and call it from both useIsoLayoutEffect and the resize handler in useEffect; the helper should accept tabBarRef and return or call setIndicator with the computed {left, top, width, height} (or null if no active element), so replace the repeated blocks in useIsoLayoutEffect and in the onResize function with a single call to measureActiveIndicator to avoid drift and keep logic centralized.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packaging/apt/README.md`:
- Around line 12-14: The README incorrectly states that build-deb.sh consumes a
build/ payload; update the description to say build-deb.sh packages the release
archive's root payload (e.g., binaries placed at
gocciascript-${VERSION}-linux-${arch}/ such as GocciaScriptLoader) into a Debian
package named gocciascript_<version>_<arch>.deb so maintainers know the script
expects binaries at the archive root rather than in a build/ subdirectory.
In `@website/scripts/fetch-nightly-binary.mjs`:
- Around line 101-117: The loop that copies binaries (iterating EXES and using
fname) must fail fast if the essential loader is missing instead of just warning
and continuing: when fname corresponds to the required GocciaScriptLoader
binary, throw an error or exit immediately rather than continue so a partial
install can't succeed without the loader. Also update the cached-binary fallback
detection that checks for "GocciaScriptLoader" to include the Windows filename
variant (e.g., "GocciaScriptLoader.exe" / the same fname used above), ensuring
the cached fallback path recognizes the .exe on Windows; update references to
GocciaScriptLoader in the fallback check to match the fname produced from
EXES/isWindows.
- Around line 39-51: The platform detection currently coerces unknown
architectures to x64 (in the platformAsset logic), causing misleading 404s;
update the platformAsset(p, a) logic to explicitly handle unsupported
architectures by returning/throwing a clear unsupported-platform error (or
returning null) instead of defaulting to "x64" — for example, validate `a` for
darwin and linux (allow only "arm64" and "x64") and for win32 (allow only "ia32"
and "x64") and produce a descriptive error mentioning the OS/arch when an
unsupported value is seen, so callers can fail fast with a clear message.
In `@website/src/components/landing.tsx`:
- Around line 921-923: Update the copy inside the paragraph JSX in the Landing
component (website/src/components/landing.tsx) so the stage count matches the
diagram; change the phrase "One pipeline, four stages." to reflect five stages
(e.g., "One pipeline, five stages.") and ensure any nearby text that references
stage count (the <p> containing "One pipeline, four stages. Click a stage...")
is updated consistently.
- Around line 231-233: The AnimatedOutput replays are being triggered by
copyTick — change AnimatedOutput to use a dedicated run token instead: introduce
a runCounter/runToken state (e.g., runToken) in the component that is
incremented only when an actual run starts/completes (where you currently handle
runs), and replace the runKey prop expression (now using copyTick) with runToken
+ (running ? 1 : 0) + output.length so copy success flashing no longer
retriggers animation; update the run-triggering code to increment runToken at
the appropriate run boundary.
In `@website/src/components/sandbox.tsx`:
- Around line 536-576: The code currently pushes a success meta line ("— exit
0") whenever data.error is absent, which can be misleading for non-2xx
responses; change the logic that adds the final meta entry to only emit the
success message when the HTTP response was successful (use res.ok) and otherwise
emit a failure/meta entry indicating the non-OK HTTP status (or omit the exit
0). Concretely, in the block that computes totalMs and pushes into lines
(referencing res, data, totalMs, and lines), gate the `— exit 0` branch on
res.ok and handle the not-res.ok case by adding an appropriate error/meta
message (e.g., include res.status or a generic failure) instead of reporting
exit 0.
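The gating described above can be reduced to a small pure function, assuming the payload error message has already been extracted. `exitLine` and the failure wording are illustrative names, not sandbox.tsx's actual code; only the `— exit 0` string is taken from the source.

```typescript
type Line = { kind: "meta" | "err"; text: string };

// Hypothetical sketch: only report success when the HTTP response was OK;
// otherwise surface the payload error, or the status as a fallback.
function exitLine(ok: boolean, status: number, errorMessage: string | undefined): Line {
  if (errorMessage) return { kind: "err", text: errorMessage };
  if (!ok) return { kind: "err", text: `request failed: HTTP ${status}` };
  return { kind: "meta", text: "— exit 0" };
}
```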
- Around line 426-443: The regex IDENTIFIER_RE in buildScript allows JavaScript
identifiers but doesn't block reserved keywords (e.g., "default", "class",
"static"), causing invalid `const <keyword> = ...;` syntax; fix it by adding a
RESERVED_WORDS set (covering ECMAScript keywords and future/reserved words like
break, case, class, const, default, enum, export, extends, import, super,
static, let, yield, await, etc.) and update the validation in buildScript to
reject any key that either fails IDENTIFIER_RE or is in RESERVED_WORDS, throwing
the same styled error (`globals key "..." is not a valid JavaScript identifier`)
so callers surface the message in the sandbox console.
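A sketch of the validation described above. The word list is abbreviated, not exhaustive, and `RESERVED_BINDING_WORDS` / `isValidGlobalsKey` are assumed names rather than sandbox.tsx's actual exports.

```typescript
// Hypothetical sketch: a key must match the identifier grammar AND not be
// a reserved word, since `const default = ...;` is a SyntaxError either way.
const IDENTIFIER_RE = /^[A-Za-z_$][A-Za-z0-9_$]*$/;

const RESERVED_BINDING_WORDS = new Set([
  "break", "case", "catch", "class", "const", "continue", "debugger",
  "default", "delete", "do", "else", "enum", "export", "extends", "false",
  "finally", "for", "function", "if", "import", "in", "instanceof", "new",
  "null", "return", "super", "switch", "this", "throw", "true", "try",
  "typeof", "var", "void", "while", "with",
  // reserved only in strict mode — module code is implicitly strict
  "implements", "interface", "let", "package", "private", "protected",
  "public", "static", "yield", "await",
]);

function isValidGlobalsKey(key: string): boolean {
  return IDENTIFIER_RE.test(key) && !RESERVED_BINDING_WORDS.has(key);
}
```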
In `@website/src/components/site-shell.tsx`:
- Around line 159-172: The theme toggle button is stateful but missing an
accessible pressed state; add an aria-pressed boolean attribute on the button
that reflects the current theme (e.g., aria-pressed={theme === "espresso"} or
the inverse depending on which you consider "pressed") so screen readers know
its on/off state, and ensure this value updates when you call setTheme and
userOverrideRef.current is set in the onClick handler; reference the existing
button with className "nav-mode", the theme variable, and the
setTheme/userOverrideRef usage when making the change.
---
Duplicate comments:
In `@website/src/components/landing.tsx`:
- Around line 86-92: The run function's re-entry guard uses React state
(running) which is async — create a synchronous ref (e.g., inFlightRef =
useRef(false)) and use inFlightRef.current as the immediate guard inside run
(check it at function start, set it true before starting the async request and
reset it to false in a finally block), keep setRunning(running) for UI only, and
also gate the textarea shortcut handler on the same inFlightRef.current so both
the button and the ⌘/Ctrl+Enter shortcut use the identical synchronous in-flight
guard; update references to running/setRunning only for UI state and ensure
inFlightRef is reset even on errors.
---
Nitpick comments:
In `@website/src/components/site-shell.tsx`:
- Around line 69-105: Extract the duplicate geometry calculation into a reusable
helper (e.g., measureActiveIndicator) and call it from both useIsoLayoutEffect
and the resize handler in useEffect; the helper should accept tabBarRef and
return or call setIndicator with the computed {left, top, width, height} (or
null if no active element), so replace the repeated blocks in useIsoLayoutEffect
and in the onResize function with a single call to measureActiveIndicator to
avoid drift and keep logic centralized.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 86ee4a0a-e750-4ef6-b96d-36092c4ada97
📒 Files selected for processing (11)
- packaging/apt/README.md
- packaging/apt/build-deb.sh
- packaging/homebrew/README.md
- packaging/homebrew/gocciascript.rb
- website/scripts/fetch-nightly-binary.mjs
- website/src/app/globals.css
- website/src/app/layout.tsx
- website/src/components/landing.tsx
- website/src/components/sandbox.tsx
- website/src/components/site-shell.tsx
- website/vercel.json
✅ Files skipped from review due to trivial changes (2)
- website/vercel.json
- packaging/apt/build-deb.sh
🚧 Files skipped from review as they are similar to previous changes (1)
- packaging/homebrew/gocciascript.rb
fetch-nightly-binary.mjs:
- Stop coercing unknown architectures to x64 — explicit error per OS
with the offending arch in the message, so a host outside the
release matrix fails with "unsupported macOS arch: …" instead of a
misleading 404 from GitHub releases.
- Hard-fail when GocciaScriptLoader is missing after extraction. It's
the only binary `/api/run` actually invokes; missing it makes the
build pass but the playground 500 at runtime.
- Cached-fallback detection now also checks `GocciaScriptLoader.exe`
so the "keep previously vendored binaries" path works on Windows.
sandbox.tsx:
- Reject reserved words as globals keys. The identifier regex
accepted `default`, `class`, `static`, etc., but `const default =
…;` is still a SyntaxError before user code runs. Added an explicit
`RESERVED_BINDING_WORDS` set covering both unconditionally
reserved and strict-mode-only future-reserved (we run in module
scope, which is implicitly strict).
- Respect HTTP status. Previously a non-2xx response without
`data.error` rendered "exit 0", which is misleading. Now we gate
the success branch on `res.ok`, surface `data.error.message` (or
the status line as fallback) on the failure path, and `.catch()`
the JSON parse so a malformed body doesn't throw us into the
network-error branch.
landing.tsx:
- Drive `<AnimatedOutput>` from a dedicated `runTick` instead of
reusing `copyTick`. The latter changes on copy success and again
when the "copied" flash resets, so the console output was
re-animating on every copy click.
- Fix stage-count copy: "four stages" → "five stages" to match the
diagram (Source / Lexer / Parser / Compiler / VM = 5).
site-shell.tsx:
- Add `aria-pressed={theme === "espresso"}` on the theme toggle so
screen readers announce its on/off state. `aria-label` already
describes the action; `aria-pressed` describes the current state.
packaging/apt/README.md:
- Drop the stale `build/` reference in the runbook. Replaced with
the actual archive layout
(`gocciascript-<version>-linux-<arch>/{GocciaScriptLoader,…}`) so
maintainers don't follow a path that doesn't exist.
Verified: bun run lint, tsc --noEmit, SKIP_VENDOR_FETCH=1 bun run
build, bun test (71 pass).
🧹 Nitpick comments (2)
website/src/components/site-shell.tsx (1)
69-105: Consolidate indicator measurement and observe layout-driven size changes.

The indicator measurement logic is duplicated, and recalculation currently depends on route changes + `window.resize`. Extracting a shared measurer and observing nav size/layout changes will reduce drift and duplication.

Refactor sketch

```diff
-import { useEffect, useLayoutEffect, useRef, useState } from "react";
+import { useCallback, useEffect, useLayoutEffect, useRef, useState } from "react";
@@
-  useIsoLayoutEffect(() => {
+  const measureIndicator = useCallback(() => {
     const bar = tabBarRef.current;
     if (!bar) return;
     const activeEl = bar.querySelector<HTMLElement>(
       '.nav-link[data-active="true"]',
     );
     if (!activeEl) return;
     const barRect = bar.getBoundingClientRect();
     const r = activeEl.getBoundingClientRect();
     setIndicator({
       left: r.left - barRect.left,
       top: r.top - barRect.top,
       width: r.width,
       height: r.height,
     });
-  }, [active]);
+  }, []);
+
+  useIsoLayoutEffect(() => {
+    measureIndicator();
+  }, [active, measureIndicator]);
@@
-  useEffect(() => {
-    const onResize = () => {
-      const bar = tabBarRef.current;
-      if (!bar) return;
-      const activeEl = bar.querySelector<HTMLElement>(
-        '.nav-link[data-active="true"]',
-      );
-      if (!activeEl) return;
-      const barRect = bar.getBoundingClientRect();
-      const r = activeEl.getBoundingClientRect();
-      setIndicator({
-        left: r.left - barRect.left,
-        top: r.top - barRect.top,
-        width: r.width,
-        height: r.height,
-      });
-    };
-    window.addEventListener("resize", onResize);
-    return () => window.removeEventListener("resize", onResize);
-  }, []);
+  useEffect(() => {
+    window.addEventListener("resize", measureIndicator);
+    const bar = tabBarRef.current;
+    const ro =
+      bar && "ResizeObserver" in window
+        ? new ResizeObserver(() => measureIndicator())
+        : null;
+    if (bar && ro) ro.observe(bar);
+    return () => {
+      window.removeEventListener("resize", measureIndicator);
+      ro?.disconnect();
+    };
+  }, [measureIndicator]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/site-shell.tsx` around lines 69 - 105, Duplicate indicator measurement in useIsoLayoutEffect and the resize useEffect should be consolidated into a single measurer and switched to observing layout changes; extract a shared function (e.g., measureIndicator) that reads tabBarRef, queries '.nav-link[data-active="true"]', computes left/top/width/height and calls setIndicator, then call that function from useIsoLayoutEffect (on active) and replace the window "resize" listener with a ResizeObserver attached to tabBarRef.current that invokes measureIndicator on changes and is properly disconnected in cleanup.

website/src/components/landing.tsx (1)

598-643: Consider adding `role="tabpanel"` for complete ARIA tab semantics.

The tablist and tab roles are correctly applied, but the content area conditionally rendered below lacks the complementary `role="tabpanel"` and `aria-labelledby` attributes. This would improve screen reader navigation.

♿ Proposed enhancement for tab accessibility

```diff
   </button>
 </div>
 {resultView === "console" ? (
-  <AnimatedOutput
-    className="arch-anim-output"
-    runKey={`console-${active}`}
-    lines={CONSOLE_OUTPUT.split("\n").map((text) => ({
-      kind: "out" as const,
-      text,
-    }))}
-  />
+  <div role="tabpanel" aria-labelledby="tab-console">
+    <AnimatedOutput
+      className="arch-anim-output"
+      runKey={`console-${active}`}
+      lines={CONSOLE_OUTPUT.split("\n").map((text) => ({
+        kind: "out" as const,
+        text,
+      }))}
+    />
+  </div>
 ) : (
-  <NumberedCode
-    className="arch-detail-anim"
-    lineCount={RESULT.split("\n").length}
-  >
-    <HighlightedGeneric code={RESULT} language="json" />
-  </NumberedCode>
+  <div role="tabpanel" aria-labelledby="tab-json">
+    <NumberedCode
+      className="arch-detail-anim"
+      lineCount={RESULT.split("\n").length}
+    >
+      <HighlightedGeneric code={RESULT} language="json" />
+    </NumberedCode>
+  </div>
 )}
```

And add `id="tab-console"` / `id="tab-json"` to the corresponding tab buttons at lines 604 and 614.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/landing.tsx` around lines 598 - 643, Add proper ARIA tabpanel semantics: give the two tab buttons unique ids (e.g., "tab-console" and "tab-json" on the buttons that call setResultView) and add role="tabpanel" plus aria-labelledby referring to the corresponding button id on the conditionally rendered content containers (the AnimatedOutput wrapper when resultView === "console" and the NumberedCode wrapper when resultView === "json"); ensure the id values match the aria-labelledby values so screen readers associate each panel with its tab.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@website/src/components/landing.tsx`:
- Around line 598-643: Add proper ARIA tabpanel semantics: give the two tab
buttons unique ids (e.g., "tab-console" and "tab-json" on the buttons that call
setResultView) and add role="tabpanel" plus aria-labelledby referring to the
corresponding button id on the conditionally rendered content containers (the
AnimatedOutput wrapper when resultView === "console" and the NumberedCode
wrapper when resultView === "json"); ensure the id values match the
aria-labelledby values so screen readers associate each panel with its tab.
In `@website/src/components/site-shell.tsx`:
- Around line 69-105: Duplicate indicator measurement in useIsoLayoutEffect and
the resize useEffect should be consolidated into a single measurer and switched
to observing layout changes; extract a shared function (e.g., measureIndicator)
that reads tabBarRef, queries '.nav-link[data-active="true"]', computes
left/top/width/height and calls setIndicator, then call that function from
useIsoLayoutEffect (on active) and replace the window "resize" listener with a
ResizeObserver attached to tabBarRef.current that invokes measureIndicator on
changes and is properly disconnected in cleanup.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d612986c-d883-4ea0-a725-dbbaa7915261
📒 Files selected for processing (5)
- packaging/apt/README.md
- website/scripts/fetch-nightly-binary.mjs
- website/src/components/landing.tsx
- website/src/components/sandbox.tsx
- website/src/components/site-shell.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- website/scripts/fetch-nightly-binary.mjs
landing.tsx (compiler-pipeline Result tabs):
- Wire `role="tab"` buttons and `role="tabpanel"` containers together with `id` / `aria-controls` / `aria-labelledby` via `useId()` so screen readers associate each panel with its controlling tab. Previously the tabs only had `role="tab"` + `aria-selected` and the panels were bare divs — the relationship was implicit.
site-shell.tsx (active-tab indicator):
- Extract the duplicated measurement logic from the layout-effect and the resize-effect into a single `measureIndicator` (memoised with `useCallback([])` so its identity is stable and the observer effect doesn't churn).
- Replace the `window.resize` listener with a `ResizeObserver` on the nav bar. Catches reflows that don't fire `resize` — font-load shifts, the GitHub stars badge populating after `fetchStars()`, hamburger menu open/close — all of which can move the active link out from under the indicator.
- Guard `typeof ResizeObserver === "undefined"` so older browsers degrade to "indicator stays put on resize" rather than throwing.
Actionable comments posted: 1
🧹 Nitpick comments (1)
website/src/components/site-shell.tsx (1)
205-208: Decouple indicator width from height in inline style.

The width fallback is currently keyed to `indicator.height`, which is unrelated and makes this harder to reason about. Prefer using `indicator.width` directly (or guarding on width itself).

♻️ Small cleanup

```diff
 style={{
   transform: `translate(${indicator.left}px, ${indicator.top}px)`,
-  width: indicator.height ? indicator.width : 0,
+  width: indicator.width,
   height: indicator.height,
 }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/site-shell.tsx` around lines 205 - 208, The inline style for the indicator in site-shell.tsx currently sets width using indicator.height ? indicator.width : 0 which couples width to an unrelated property; change the width expression to depend on indicator.width (e.g., use indicator.width ?? 0 or indicator.width ? indicator.width : 0) or explicitly guard on typeof indicator.width === 'number' so the style uses the actual width value instead of the height.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@website/src/components/landing.tsx`:
- Around line 868-880: The slug generation for each EXCLUDED item can collide
(e.g., "==" and "!=" both become "-"), causing duplicate tooltip IDs; update the
code in the EXCLUDED.map callback (where slug is computed and used in
aria-describedby and id `ex-tip-${slug}`) to produce a guaranteed-unique
identifier — for example append the map index or a short hash (e.g., `-${index}`
or similar) to the slug or use a stable unique key derived from x.name plus
index; ensure both the aria-describedby attribute and the span id are updated in
tandem so each button and tooltip pair remains uniquely linked.
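The fix described above can be sketched as a tiny helper; `tooltipId` is an illustrative name, with only the `ex-tip-` prefix and slug regex taken from the source.

```typescript
// Hypothetical sketch: append the map index so operator-only names like
// "==" and "!=" can no longer collide after slugging.
function tooltipId(name: string, index: number): string {
  const slug = name.replace(/[^A-Za-z0-9]+/g, "-");
  return `ex-tip-${slug}-${index}`;
}
```

Using one local for the id means the button's `aria-describedby` and the tooltip's `id` cannot drift apart.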
---
Nitpick comments:
In `@website/src/components/site-shell.tsx`:
- Around line 205-208: The inline style for the indicator in site-shell.tsx
currently sets width using indicator.height ? indicator.width : 0 which couples
width to an unrelated property; change the width expression to depend on
indicator.width (e.g., use indicator.width ?? 0 or indicator.width ?
indicator.width : 0) or explicitly guard on typeof indicator.width === 'number'
so the style uses the actual width value instead of the height.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 88ddf434-b02e-4049-bd34-29c16b26536b
📒 Files selected for processing (2)
- website/src/components/landing.tsx
- website/src/components/site-shell.tsx
landing.tsx — EXCLUDED row tooltips:
- The slug is computed as `x.name.replace(/[^A-Za-z0-9]+/g, "-")`, which collapses operator-only names to identical slugs. `"=="` and `"!="` both reduce to `"-"`, so two buttons end up with `aria-describedby="ex-tip--"` pointing at a duplicate `id`. Screen readers fall back to the first match, which means one of the two pills announces the wrong tooltip body.
- Append the map index to the slug so each pair gets a unique id regardless of how aggressively the regex strips the name. Stash the resulting id in a local so the `aria-describedby` and the `<span id>` can't drift.
site-shell.tsx — active-tab indicator inline style:
- The width was being gated on `indicator.height` rather than its own axis (`width: indicator.height ? indicator.width : 0`). Functionally the ternary is also dead — `IndicatorRect` declares all four fields as required `number`s, the outer `indicator && (...)` already gates rendering, and `getBoundingClientRect()` on a rendered nav link always has non-zero height. Drop the ternary and read each axis directly with a comment noting why no gating is needed.
🧹 Nitpick comments (3)
website/src/components/landing.tsx (3)
51-51: Consider importing `OutputLine` from the shared module.

This local type duplicates the one exported from `@/lib/examples` (used by `formatError`). Importing it would avoid drift if the canonical definition changes.

♻️ Suggested change

```diff
+import type { OutputLine } from "@/lib/examples";
 import { formatError } from "@/lib/format-error";
-
-type OutputLine = { kind: "meta" | "out" | "err"; text: string };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/landing.tsx` at line 51, The local type alias OutputLine duplicates the canonical type exported from the shared module used by formatError; replace the local declaration of OutputLine with an import from "@/lib/examples" (import { OutputLine } from "@/lib/examples") and remove the duplicate type to prevent drift and keep a single source of truth for the OutputLine definition used by formatError and this component.
150-154: Harden the error message extraction.

`fetch` typically throws `Error` objects, but the cast could fail for `AbortError` edge cases or mocked environments where non-Error values are thrown.

🛡️ Suggested fix

```diff
 } catch (err) {
   setOutput([
     { kind: "meta", text: banner },
-    { kind: "err", text: `network error: ${(err as Error).message}` },
+    { kind: "err", text: `network error: ${err instanceof Error ? err.message : String(err)}` },
   ]);
 } finally {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/landing.tsx` around lines 150 - 154, The catch block in landing.tsx uses a direct cast (err as Error).message which can fail for AbortError or non-Error throws; change it to normalize the error message before calling setOutput (e.g., create a small helper or inline logic that checks err instanceof Error ? err.message : typeof err === "string" ? err : String(err) and fallbacks to a safe value), then call setOutput([{ kind: "meta", text: banner }, { kind: "err", text: `network error: ${message}` }]); reference setOutput and banner when applying the fix.
787-794: Consider using `<Link>` for the internal `/install` route. Using `<a target="_blank">` for an internal route bypasses Next.js client-side navigation and opens a new tab. If the intent is to keep the landing page open while viewing the install script, this is fine — but if it's just a documentation page, `<Link>` would provide a smoother UX.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@website/src/components/landing.tsx` around lines 787 - 794, The anchor element linking "/install" (the <a href="/install" target="_blank" rel="noopener noreferrer" className="link-button text-[0.85rem] text-ink-3"> in landing.tsx) should use Next.js client-side navigation: import Link from 'next/link' and replace the external-style anchor with a <Link> to="/install" (keeping the same className for styling) and remove target and rel attributes; if you must open in a new tab keep the <a target="_blank" rel="noopener noreferrer"> but otherwise use <Link> (or <Link> wrapping the <a> if you prefer) to enable SPA navigation and a smoother UX.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@website/src/components/landing.tsx`:
- Line 51: The local type alias OutputLine duplicates the canonical type
exported from the shared module used by formatError; replace the local
declaration of OutputLine with an import from "@/lib/examples" (import {
OutputLine } from "@/lib/examples") and remove the duplicate type to prevent
drift and keep a single source of truth for the OutputLine definition used by
formatError and this component.
- Around line 150-154: The catch block in landing.tsx uses a direct cast (err as
Error).message which can fail for AbortError or non-Error throws; change it to
normalize the error message before calling setOutput (e.g., create a small
helper or inline logic that checks err instanceof Error ? err.message : typeof
err === "string" ? err : String(err) and fallbacks to a safe value), then call
setOutput([{ kind: "meta", text: banner }, { kind: "err", text: `network error:
${message}` }]); reference setOutput and banner when applying the fix.
- Around line 787-794: The anchor element linking "/install" (the <a
href="/install" target="_blank" rel="noopener noreferrer" className="link-button
text-[0.85rem] text-ink-3"> in landing.tsx) should use Next.js client-side
navigation: import Link from 'next/link' and replace the external-style anchor
with a <Link> to="/install" (keeping the same className for styling) and remove
target and rel attributes; if you must open in a new tab keep the <a
target="_blank" rel="noopener noreferrer"> but otherwise use <Link> (or <Link>
wrapping the <a> if you prefer) to enable SPA navigation and a smoother UX.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3ccfd401-31fc-40ba-8743-27cfe591edca
📒 Files selected for processing (2)
- website/src/components/landing.tsx
- website/src/components/site-shell.tsx
- Drop the local `OutputLine` alias and import the canonical one from `@/lib/examples`. The local was a strict subset of the shared 5-kind type and the static `formatError` mapping already narrows kinds we don't render — single source of truth, zero behaviour change.
- Replace the blind `(err as Error).message` cast in the run() catch with a 3-step normaliser (`Error` → `.message`, string → as-is, otherwise `String(err)`). Avoids "network error: undefined" if a runtime ever throws a non-Error, and keeps the existing AbortError / TypeError / SyntaxError messages intact.
- Swap the `<a href="/install" target="_blank">` for `<Link>`. The route is an internal Next.js page, so soft nav + prefetch apply; we keep `target="_blank"` because the UX is "peek at the script in a new tab without losing landing-page state".
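The 3-step normaliser described in the second bullet can be sketched as below; `toErrorMessage` is an illustrative name, not necessarily the helper used in the component:

```typescript
// Illustrative sketch of the 3-step error-message normalisation.
function toErrorMessage(err: unknown): string {
  if (err instanceof Error) return err.message; // 1. Error → use .message
  if (typeof err === "string") return err; // 2. string → as-is
  return String(err); // 3. anything else → String()
}
```

This keeps existing `AbortError` / `TypeError` messages intact while never producing `"undefined"` for exotic throw values.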
Summary
Adds the GocciaScript marketing/docs website under `website/`. New top-level surface — does not modify the engine.

Site sections
- **Docs** — markdown pages from `../docs/` (synced at build time via `scripts/sync-docs.mjs`). Custom markdown parser supports Pascal, JS, TS, C++, C#, Crystal, JSON syntax highlighting; Mermaid diagrams (dynamic-imported so the ~1MB payload only ships on pages that need it); anchor-link headings; cross-doc internal links; mobile-collapsible sidebar.
- **Playground** — runs real GocciaScript by spawning the bundled `GocciaScriptLoader` from `/api/run` (rate-limited per-IP, code/output size caps, timeout, memory limit, instruction limit, fetch host allowlist). Editor features: ghost-text autocomplete, smart bracket pairing (auto-close, step-over, pair-backspace), auto-indent on Enter (preserves leading whitespace + adds indent after `{`/`[`/`(`, splits matched pairs with the caret on the indented middle line), ASI toggle, share-link round-trip (code + backend + ASI). Version selector reads recent GitHub releases and shows precedence-ordered pins (latest patch, previous minor cycle's latest patch, previous major's latest patch — with major falling back to a previous-minor pin when no major has shipped yet).

Stack: Next.js 16 (App Router, Turbopack) · React 19 · Tailwind v4 · Biome · Bun.
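The version selector's pin precedence described above can be sketched as follows. Only the precedence order comes from this summary; the tag format and the `parse` / `selectPins` helper names are assumptions, not the site's actual code:

```typescript
// Illustrative sketch of precedence-ordered version pins: latest patch,
// previous minor cycle's latest patch, previous major's latest patch —
// falling back to an older minor pin when no previous major exists.
type SemVer = { major: number; minor: number; patch: number };

function parse(tag: string): SemVer {
  const [major, minor, patch] = tag.replace(/^v/, "").split(".").map(Number);
  return { major, minor, patch };
}

function selectPins(tags: string[]): string[] {
  const all = tags
    .map(parse)
    .sort((a, b) => b.major - a.major || b.minor - a.minor || b.patch - a.patch); // newest first
  const latest = all[0];
  const pins: SemVer[] = [latest];
  // Latest patch of the previous minor cycle (same major, lower minor).
  const prevMinor = all.find((v) => v.major === latest.major && v.minor < latest.minor);
  if (prevMinor) pins.push(prevMinor);
  // Previous major's latest patch, else fall back to an even older minor pin.
  const prevMajor = all.find((v) => v.major < latest.major);
  if (prevMajor) {
    pins.push(prevMajor);
  } else if (prevMinor) {
    const cutoff = prevMinor.minor;
    const older = all.find((v) => v.major === latest.major && v.minor < cutoff);
    if (older) pins.push(older);
  }
  return pins.map((v) => `v${v.major}.${v.minor}.${v.patch}`);
}
```

Because each lookup scans the descending-sorted list, the first match in a bucket is automatically that bucket's latest patch.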
Repo plumbing tweaks (kept minimal so the website work doesn't block other CI):
- `scripts/check-doc-{links,length,duplication}.ts` and `.markdownlint-cli2.jsonc` grow a small ignore list for `website/content/docs/**` (build-time synced copies of `docs/` — already validated at the source location), nested `node_modules`, and the `website/CLAUDE.md` Claude-Code `@AGENTS.md` include shim. Without these, the doc checkers walk the synced copies and report the original docs' relative links as broken (because the relative paths only resolve in their original location), and `markdownlint-cli2` reports lint errors in `website/node_modules/`'s vendored READMEs.

Non-goals
- `Object.isFrozen` reads the internal flag instead of running `TestIntegrityLevel`; tests don't cover this; warning misdirects
- #405 — decorator wrappers + function-expression silent-degrade + test gap
- `Temporal.*.until`/`*.since` ignore the `options` argument

Testing
- Lint (`bun run lint`, 42 files)
- Typecheck (`bunx tsc --noEmit`)
- Build (`bun run build`, 38 routes generated, all `/docs/[id]` SSG pages)
- Doc checkers (`check-doc-links` 254 links, `check-doc-symbols` 576 refs, `check-doc-length` 44 files, `check-doc-duplication` 0 exact + 3 fuzzy warnings unchanged from main, `markdownlint-cli2` 39 files / 0 errors)
- Docs are synced from `../docs/` at build time (single source of truth lives there, per the `website/.gitignore` comment). `website/README.md` describes the site's role and the `sync-docs.mjs` flow.
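The build-time sync step can be sketched as below. This is a minimal illustration, not the real `scripts/sync-docs.mjs` (which may also handle front-matter or link rewriting); the demo runs against throwaway temp directories standing in for `../docs/`:

```typescript
// Illustrative sketch of a build-time doc sync: copy markdown files from a
// source docs directory into the website's content directory before build.
import { cpSync, mkdirSync, mkdtempSync, readdirSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

function syncDocs(srcDir: string, destDir: string): string[] {
  mkdirSync(destDir, { recursive: true });
  const copied: string[] = [];
  for (const name of readdirSync(srcDir)) {
    if (!name.endsWith(".md")) continue; // only markdown pages are synced
    cpSync(join(srcDir, name), join(destDir, name));
    copied.push(name);
  }
  return copied.sort();
}

// Demo against temp directories (stand-ins for ../docs/ and website/content/docs/).
const src = mkdtempSync(join(tmpdir(), "docs-src-"));
const dest = mkdtempSync(join(tmpdir(), "docs-dest-"));
writeFileSync(join(src, "intro.md"), "# Intro\n");
writeFileSync(join(src, "notes.txt"), "not a doc page\n");
const copied = syncDocs(src, dest);
```

Keeping the synced copies out of version control (per the `website/.gitignore` comment) preserves `docs/` as the single source of truth.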