feat: content model, state machine, admin UI, and test infrastructure #3
flyingrobots merged 10 commits into main
Conversation
Rewrite public/index.html as a full CMS dashboard with sidebar tabs
(Drafts/Live), split markdown editor with live preview, metadata editing,
drag-drop uploads, toast notifications, keyboard shortcuts, and autosave.
Fix three bugs: fetchList now sends kind=articles instead of kind=draft,
save() sends {slug, title, body, trailers} instead of {slug, message},
and all POST requests include Content-Type: application/json header.
Fix CmsService compatibility with npm-published @git-stunts packages:
update imports for plumbing ShellRunner and trailer-codec, replace
empty-graph dependency with direct git plumbing calls, add missing
await on async server handlers, and add SVG MIME type to static server.
Add DI seam to CmsService so tests can inject an InMemoryGraphAdapter instead of forking real git subprocesses. Unit tests now run in ~11ms vs hundreds of ms. Real-git smoke tests moved to test/git-e2e.test.js, excluded from default runs.
Adds markdown preview (split/edit/preview modes), autosave, toast notifications, skeleton loading, drag-and-drop uploads, metadata editor, keyboard shortcuts, and dark mode token system.
- Add inline descriptions for demo/quickstart scripts in README
- Correct "all scripts are tested" claim in scripts/README (only setup.sh has BATS coverage)
- Convert bare file paths to proper markdown links
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
Summary by CodeRabbit
Walkthrough

Adds a content state machine module and DI/graph mode to CmsService, implements unpublish/revert lifecycle (service methods, server endpoints, CLI commands), replaces the admin UI with a client-side SPA editor, expands tests (in-memory state-machine + real-git E2E), and adds related scripts, styles, and docs. The CmsService constructor now accepts an optional graph adapter.
Sequence Diagram(s)

sequenceDiagram
participant Client as Client
participant Server as Server
participant Cms as CmsService
participant Backend as Graph/Git Backend
Note over Client,Server: Unpublish flow
Client->>Server: POST /api/cms/unpublish { slug }
Server->>Cms: unpublishArticle({ slug })
activate Cms
Cms->>Cms: _resolveArticleState(slug)
Cms->>Cms: validateTransition(PUBLISHED, UNPUBLISHED)
Cms->>Backend: delete published ref
Cms->>Backend: update draft ref via _updateRef
deactivate Cms
Cms-->>Server: { slug, sha, state }
Server-->>Client: 200 { result }
Note over Client,Server: Revert flow
Client->>Server: POST /api/cms/revert { slug }
Server->>Cms: revertArticle({ slug })
activate Cms
Cms->>Cms: _resolveArticleState(slug)
Cms->>Backend: read parent SHA/content
Cms->>Backend: create new draft (REVERTED) via _updateRef
deactivate Cms
Cms-->>Server: { slug, sha, state }
Server-->>Client: 200 { result }
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 1 failed (1 warning)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 924184313a
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
src/lib/CmsService.js (Outdated)

  validateTransition(effectiveState, STATES.UNPUBLISHED);

  // Delete the published ref
  await this.graph.deleteRef(pubRef);
Preserve published ref until unpublish commit succeeds
unpublishArticle deletes the live published ref before building and committing the replacement draft snapshot. If showNode, commitNode, or _updateRef fails afterward (for example due to a transient git/storage error), the API returns an error even though the article has already been removed from published refs, causing partial state corruption and unintended content takedown.
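A minimal sketch of the reordering, assuming the adapter methods quoted in this review (`showNode`, `commitNode`, `deleteRef`) plus the trailer codec and `_updateRef` behave as they do elsewhere in `CmsService`; the `commitNode` signature and the `newDraftSha` name are illustrative:

  // Sketch only: build and repoint the unpublished draft first; delete the live ref last.
  const message = await this.graph.showNode(draftSha);
  const decoded = this.codec.decode(message);               // assumed codec API, as referenced elsewhere in this PR
  const newMessage = this.codec.encode({
    ...decoded,
    trailers: { ...decoded.trailers, status: STATES.UNPUBLISHED },
  });
  const newDraftSha = await this.graph.commitNode({ message: newMessage, parents: [draftSha] }); // illustrative signature
  await this._updateRef({ ref: draftRef, newSha: newDraftSha, oldSha: draftSha });
  // Only now remove the published ref; a failure above leaves the live article untouched.
  await this.graph.deleteRef(pubRef);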
  const { effectiveState, draftSha } = await this._resolveArticleState(canonicalSlug);
  validateTransition(effectiveState, STATES.REVERTED);

  const info = await this.graph.getNodeInfo(draftSha);
Reject revert requests when no draft commit exists
revertArticle calls getNodeInfo(draftSha) without first validating that draftSha is present. For slugs without an articles/<slug> ref (e.g., never saved, or published via explicit sha only), draftSha is null, so this path can throw a backend error and return 500 instead of a structured validation response.
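A hedged sketch of the guard, reusing `CmsValidationError` as it appears elsewhere in this PR; the error code and the options-object constructor shape are assumptions:

  const { effectiveState, draftSha } = await this._resolveArticleState(canonicalSlug);
  if (!draftSha) {
    // Return a structured validation failure instead of letting getNodeInfo(null) surface as a 500.
    throw new CmsValidationError(`no draft commit exists for "${canonicalSlug}"`, {
      code: 'revert_no_draft', // illustrative code, in the spirit of revert_no_parent
      field: 'slug',
    });
  }
  validateTransition(effectiveState, STATES.REVERTED);
  const info = await this.graph.getNodeInfo(draftSha);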
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 924184313a
src/lib/CmsService.js (Outdated)

  await this.graph.deleteRef(pubRef);

  // Read current draft content and re-commit with status: unpublished
  const message = await this.graph.showNode(draftSha);
Preserve published ref until unpublish commit succeeds
Deleting pubRef before creating/updating the unpublished draft makes unpublishArticle non-recoverable on write failures: if commitNode or the later draft ref update throws (for example a signing failure when CMS_SIGN=1), the live ref is already removed and the article now resolves as draft, so retrying unpublish is blocked by invalid_state_transition and requires manual git repair. Commit and repoint the draft first (or add rollback) so a failed unpublish cannot silently drop live content.
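If the delete-first order has to stay, a rollback sketch along these lines would at least bound the damage; whether `_updateRef` can re-create a deleted ref is an assumption:

  const pubSha = await this.graph.readRef(pubRef);
  await this.graph.deleteRef(pubRef);
  try {
    // ... build the unpublished draft commit and repoint the draft ref ...
  } catch (err) {
    // Best-effort restore so a failed unpublish cannot silently drop live content.
    await this._updateRef({ ref: pubRef, newSha: pubSha });
    throw err;
  }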
public/index.html (Outdated)

  clearTimeout(previewTimer);
  previewTimer = setTimeout(() => {
    const body = document.getElementById('bodyInput').value;
    document.getElementById('previewPane').innerHTML = marked.parse(body || '');
Sanitize markdown preview HTML before assigning innerHTML
updatePreview writes marked.parse(body) directly to innerHTML without sanitization, so a markdown payload containing raw HTML (for example an element with an event handler) can execute JavaScript when an editor loads or previews that article. This creates a stored XSS path in the admin UI where repository content can run code in an operator’s browser.
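A minimal sketch of the sanitized preview path, assuming DOMPurify is loaded on the page (the debounce delay shown is illustrative):

  clearTimeout(previewTimer);
  previewTimer = setTimeout(() => {
    const body = document.getElementById('bodyInput').value;
    // Render Markdown, then strip script tags and event-handler attributes before touching the DOM.
    const html = DOMPurify.sanitize(marked.parse(body || ''));
    document.getElementById('previewPane').innerHTML = html;
  }, 200);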
Actionable comments posted: 23
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
src/lib/CmsService.js (3)
177-182: ⚠️ Potential issue | 🔴 Critical

Inconsistent trailer key casing: `updatedAt` here vs `updatedat` in `unpublishArticle`/`revertArticle`.

Line 181 writes the trailer key as `updatedAt` (camelCase). Lines 239 and 277 write it as `updatedat` (all lowercase). These are distinct JS object keys, so any downstream consumer checking `trailers.updatedAt` will silently get `undefined` for articles touched by unpublish or revert, and vice versa. Pick one form and use it everywhere.

🐛 Proposed fix — normalize to `updatedat` everywhere (matching the lowercase trailer convention)

  const finalTrailers = {
    ...safeTrailers,
    contentid: identity.contentId,
    status: 'draft',
-   updatedAt: new Date().toISOString(),
+   updatedat: new Date().toISOString(),
  };
178-181: 🛠️ Refactor suggestion | 🟠 Major

Uses the string literal `'draft'` instead of the `STATES.DRAFT` constant.

You import `STATES` on line 14 and use it religiously everywhere else (`STATES.UNPUBLISHED`, `STATES.REVERTED`), yet line 180 hard-codes `'draft'`. One typo and this silently breaks the state machine.

♻️ Proposed fix

  const finalTrailers = {
    ...safeTrailers,
    contentid: identity.contentId,
-   status: 'draft',
+   status: STATES.DRAFT,
    updatedat: new Date().toISOString(),
  };
198-217: 🧹 Nitpick | 🔵 Trivial

`publishArticle` resolves state after reading `targetSha`, creating a TOCTOU window.

Lines 203–204 read `targetSha` from the draft ref. Lines 207–208 then resolve the article state and validate the transition. Between these two operations, another concurrent request could mutate the draft ref. In production mode `_updateRef` uses CAS to mitigate this, but the validation itself could pass on stale state. Consider resolving state first (or atomically with the read).

src/server/index.js (1)
128-202: 🛠️ Refactor suggestion | 🟠 Major

Five POST handlers with identical body-accumulation boilerplate — extract a helper.

`snapshot`, `publish`, `unpublish`, `revert`, and `upload` all repeat the same `let body = ''; req.on('data', ...); req.on('end', async () => { try { ... } catch { ... } })` skeleton. This PR adds two more copies. Extract a `readJsonBody(req)` helper that returns a `Promise<object>` and handles the accumulation + parse + size limit in one place.

♻️ Example helper

  const MAX_BODY_BYTES = 1_048_576; // 1 MiB

  function readJsonBody(req) {
    return new Promise((resolve, reject) => {
      let body = '';
      let size = 0;
      req.on('data', (chunk) => {
        size += chunk.length;
        if (size > MAX_BODY_BYTES) {
          req.destroy();
          return reject(Object.assign(new Error('Request body too large'), { statusCode: 413 }));
        }
        body += chunk;
      });
      req.on('end', () => {
        try {
          resolve(JSON.parse(body || '{}'));
        } catch (e) {
          reject(Object.assign(new Error('Invalid JSON'), { statusCode: 400 }));
        }
      });
      req.on('error', reject);
    });
  }

This also addresses the pre-existing unbounded body accumulation across all POST endpoints — a trivially exploitable DoS vector.
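A hedged usage sketch for one of those handlers; the route wiring and `{ result }` response shape are assumptions based on the endpoints named above:

  // e.g. inside the POST /api/cms/unpublish handler
  try {
    const { slug } = await readJsonBody(req);
    const result = await cms.unpublishArticle({ slug });
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ result }));
  } catch (err) {
    res.writeHead(err.statusCode || 500, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ error: err.message }));
  }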
🤖 Fix all issues with AI agents
In `@CHANGELOG.md`:
- Around line 1-33: The changelog has an [Unreleased] link anchor but no link
definition; add a Keep-a-Changelog style reference link for [Unreleased] at the
bottom of CHANGELOG.md (e.g., `[Unreleased]: <compare-URL>`), ensuring the
anchor `[Unreleased]` in the header resolves (use your repo compare URL such as
main...HEAD or the appropriate base branch).
In `@public/index.html`:
- Around line 904-914: The global keydown listener unconditionally calls
UI.deselect on Escape and should ignore keyboard events originating from form
controls or editable elements; update the listener registered with
document.addEventListener to early-return when e.target (or
document.activeElement) is an input, textarea, select, or an element with
contentEditable=true (and also when e.defaultPrevented) so UI.deselect only runs
for non-editing contexts, or alternatively attach the handler to the main app
container instead of document to scope it; reference the existing listener,
UI.deselect, state.currentSlug, and the 'slugInput' element when making the
change.
- Around line 858-867: markDirty() currently clears and resets
state.autosaveTimer to call save() after 3s, but if save() takes longer than 3s
the next autosave fires while a previous save is in-flight and is silently
dropped by the state.saving guard, risking lost edits; update save() to, in its
finally block, check state.dirty and re-arm state.autosaveTimer (using the same
timer duration) when a save completed while still dirty (or alternatively
increase the debounce to 5–10s by changing the timeout value used in
markDirty()), and ensure you use the same symbols state.autosaveTimer,
state.saving, state.dirty, markDirty(), and save() when making the change.
- Line 7: The external CDN scripts (e.g., the marked script
"https://cdn.jsdelivr.net/npm/marked@15/marked.min.js" and the Open Props unpkg
scripts referenced on lines 9–11) are included without Subresource Integrity;
update each external <script> and <link> tag to include a correct
integrity="..." SRI hash and crossorigin="anonymous" attribute, or replace those
CDN references by vendoring the files locally and serving them from your repo;
ensure the integrity values match the exact asset contents (regenerate hashes if
you change the files) and add crossorigin="anonymous" alongside each integrity
attribute.
- Around line 924-942: The drop handler is registered on document causing
uploads when files are dropped anywhere; scope it to the actual element to match
the visual feedback. Replace document.addEventListener('drop', ...) with
dropZone?.addEventListener('drop', ...) (guarding for null), keep the
e.preventDefault() and dropZone.classList.remove('dragover'), read files from
e.dataTransfer and call UI.handleUpload(f) as before, and remove the global
document drop listener so only drops on the element referenced by dropZone
trigger uploads; adjust the dragover/dragleave handlers if needed to also be
attached to dropZone rather than document for consistent UX.
- Around line 588-597: The code injects unescaped server data into innerHTML: in
the items.forEach loop where li.innerHTML = `<span
class="slug">${item.slug}</span>`, replace this unsafe insertion by setting a
safe DOM/text value (e.g., use li.textContent or create a span element and set
its textContent or run item.slug through the existing escAttr helper) so the
slug is escaped before rendering; update the block that creates li (in the same
forEach that also sets li.onclick = () => this.loadArticle(item.slug)) to create
a span element, set span.textContent = item.slug (or escAttr(item.slug)), append
the span to li, and remove direct use of innerHTML.
- Around line 748-762: The renderBadges method is injecting untrusted trailer
values (statusVal and updated) via row.innerHTML causing an XSS risk; replace
the HTML string interpolation with DOM construction: clear row, for statusVal
create a span element, set span.className to "badge" plus the computed cls and
set span.textContent = statusVal, append it to row; for updated create a span
with className "badge badge-info" and set its textContent to "Updated: " +
(relTime(updated) || updated), then append—do not use innerHTML or string
concatenation with trailer values in renderBadges.
- Around line 821-838: The current reader.onload handler assumes e.target.result
contains a comma-separated data URL and does b64 =
e.target.result.split(',')[1]; guard this by validating e?.target?.result is a
string and that result.includes(',') and split(',')[1] is truthy before calling
api.upload; if the base64 segment is missing, show a friendly toast error (or
reject) instead of sending "undefined", and avoid proceeding to insert the image
or call updatePreview; update references: reader.onload, e.target.result, b64,
and api.upload.
- Around line 790-797: In removeTrailerRow, the code attempts to remove the
remove-button by positional index after already deleting elements by data-idx,
which uses a stale positional index; delete the statement that calls
grid.querySelectorAll('.remove-btn')[idx]?.remove() and rely on re-collecting
state via this.collectTrailers() and this.renderTrailers() to update the DOM
correctly, ensuring no other code depends on that positional removal.
- Around line 548-554: The updatePreview function currently assigns untrusted
output from marked.parse(...) directly to previewPane.innerHTML, causing stored
XSS; sanitize the generated HTML before assigning it by importing/including
DOMPurify (or configuring marked's sanitizer) and replace the assignment with
something like DOMPurify.sanitize(marked.parse(body || '')) so
previewPane.innerHTML receives only sanitized HTML; locate the updatePreview
function and the innerHTML assignment and wrap the marked.parse result with
DOMPurify.sanitize (or equivalent configured sanitizer).
In `@src/lib/CmsService.js`:
- Around line 89-106: _resolveArticleState re-canonicalizes an already-canonical
slug (calling canonicalizeSlug on every hot-path call); remove that redundant
normalization and use the incoming slug directly. Edit _resolveArticleState to
stop calling canonicalizeSlug (delete the canonicalizeSlug(canonicalSlug) call
and use the slug parameter when computing draftRef/pubRef via _refFor), leaving
the rest (graph.readRef, graph.showNode, codec.decode, resolveEffectiveState)
unchanged; ensure callers continue to pass a canonicalized slug (or add a short
comment if you decide to keep the defensive call instead).
- Around line 66-72: The _updateRef implementation drops oldSha when calling
this.graph.updateRef, which removes compare-and-swap safety present in
this.repo.updateRef; update _updateRef to preserve CAS semantics by calling
this.graph.updateRef with the oldSha when the graph adapter supports it (e.g.,
pass { ref, newSha, oldSha } or call a CAS-specific method on the adapter), and
if the InMemoryGraphAdapter does not support CAS, either throw an explicit error
or add a clear comment and a defensive runtime check (e.g., detect absence of
CAS API on this.graph and log/throw) so tests do not silently hide
concurrent-write issues—refer to the _updateRef method and the
InMemoryGraphAdapter implementation when making the change.
- Around line 120-148: The DI path in listArticles causes N+1 latency by
awaiting this.graph.readRef(ref) in a loop; fix it by replacing the sequential
reads with a parallel approach: after obtaining refs from
this.graph.listRefs(ns) map them to promises (e.g., this.graph.readRef(ref) for
each) and await Promise.all to collect SHAs, or if the graph adapter exposes a
batch API (e.g., readRefs or readMultiple) call that instead; then build results
using refs and the corresponding SHAs and return the array (update symbols:
listArticles, this.graph.listRefs, this.graph.readRef, results, ns).
- Around line 33-60: In DI mode the constructor leaves this.cas and this.vault
null causing uploadAsset to NPE; fix by either requiring and wiring CAS/Vault
stubs from the caller or adding explicit guards: update the constructor({ cwd,
refPrefix, graph, cas, vault }) to set this.cas = cas || null and this.vault =
vault || null (or pull CAS/Vault adapters from the provided
GraphPersistencePort/InMemoryGraphAdapter when available), and/or modify
uploadAsset to check for this.cas and this.vault at the top and throw a clear
Error like "CmsService.uploadAsset requires cas and vault to be provided in DI
mode" instead of letting a TypeError surface; reference the constructor,
uploadAsset, cas, vault, and GraphPersistencePort/InMemoryGraphAdapter when
making the change.
- Around line 222-250: unpublishArticle (and similarly revertArticle) must guard
against draftSha being null before calling this.graph.showNode or
this.graph.getNodeInfo; update unpublishArticle and revertArticle to check the
resolved draftSha (from _resolveArticleState) and if null throw a clear,
descriptive error (e.g., "Missing draft SHA for <slug> — cannot
unpublish/revert") or return a controlled failure instead of calling
showNode/getNodeInfo, so you avoid opaque runtime exceptions and ensure
_updateRef/commit steps only run when draftSha is present.
In `@src/lib/ContentStatePolicy.js`:
- Around line 42-47: resolveEffectiveState currently falls through to
STATES.DRAFT for any unknown draftStatus; change it to validate draftStatus
(allowed: STATES.DRAFT, STATES.UNPUBLISHED, STATES.REVERTED, or null/undefined)
and if pubSha is truthy still return STATES.PUBLISHED, but if draftStatus is
present and not one of the allowed values throw a CmsValidationError containing
the invalid value and a short message; update the function resolveEffectiveState
to perform this check before returning DRAFT so garbage inputs like "banana"
cause a clear validation error rather than silently becoming DRAFT.
- Around line 28-33: The TRANSITIONS constant is currently Object.freeze'd but
still contains mutable Set instances (TRANSITIONS), allowing callers to mutate
state like TRANSITIONS[STATES.DRAFT].add(...); fix by replacing the mutable Sets
with truly immutable collections or defensive wrappers: either replace each new
Set(...) with a frozen array (e.g., [STATES.DRAFT, ...]) and update
validateTransition (or any consumer using TRANSITIONS) to use .includes()
instead of .has(), or create an immutable wrapper around Set (e.g.,
makeImmutableSet that returns a Proxy which throws on mutation or return an
object exposing only a .has/read method) and use that for TRANSITIONS so runtime
mutation attempts fail; ensure you reference TRANSITIONS and validateTransition
(or any code that calls TRANSITIONS[...].has) and update those call sites
accordingly.
In `@test/git-e2e.test.js`:
- Around line 43-51: The test 'propagates underlying git errors while listing'
monkey-patches cms.plumbing.execute but restores it only after the assertion,
which can leak the patch if the assertion throws; wrap the monkey-patch and
assertion in a try/finally so cms.plumbing.execute is always restored, i.e.,
assign originalExecute, set cms.plumbing.execute to the throwing stub, then run
the await expect(cms.listArticles()).rejects... inside the try and restore
cms.plumbing.execute = originalExecute in finally to guarantee cleanup.
In `@test/git.test.js`:
- Around line 175-188: The test currently calls cms.revertArticle twice;
consolidate into a single invocation that captures the thrown error and asserts
all expectations (message, instance, and code) once. Replace the first await
expect(...).rejects.toThrow(...) and the subsequent duplicate try/catch with one
try/catch that awaits cms.revertArticle({ slug: 'sm-no-parent' }), catches the
error, asserts the error message contains "no parent commit exists" (or use a
regex), asserts the error is an instance of CmsValidationError, and asserts
err.code === 'revert_no_parent'. This keeps cms.revertArticle,
CmsValidationError and the 'revert_no_parent' code checks intact while removing
the redundant second call.
- Around line 190-227: Replace the manual try/catch + expect.fail pattern in the
three tests ("cannot unpublish a draft", "cannot revert a published article",
"cannot publish a reverted article") with Vitest's promise rejection matchers:
call the async CMS method inside expect(...).rejects and assert the error type
and code (e.g., toBeInstanceOf(CmsValidationError) and
toHaveProperty('code','invalid_state_transition')), targeting the
cms.unpublishArticle, cms.revertArticle, and cms.publishArticle calls
respectively so the tests are shorter and use the built-in rejects matcher for
failure assertions.
In `@test/server.test.js`:
- Around line 167-211: Both the "unpublishes an article via POST
/api/cms/unpublish" and "reverts an article via POST /api/cms/revert" tests
perform setup fetches to /api/cms/snapshot and /api/cms/publish without
asserting their results; add assertions immediately after each setup fetch
(e.g., check res.ok or expect(res.status).toBe(200)) so silent failures in the
setup (snapshot/publish) surface clearly; update the two tests' initial fetch
calls to capture the response and assert success before proceeding to the final
unpublish/revert request.
In `@vitest.config.js`:
- Line 15: Update the exclude pattern in the vitest config's exclude array:
replace the literal 'test/git-e2e*' with a future-proof glob such as
'test/git-e2e**' so it will exclude both files like test/git-e2e.test.js and any
potential test/git-e2e/ directory contents; modify the value inside the exclude
array where the symbol exclude is defined.
- Around line 8-13: The vitest alias hardcodes the internal path to
InMemoryGraphAdapter which breaks tests and is unmaintainable; open package.json
of `@git-stunts/git-warp` and check its "exports" or main entry to see if
InMemoryGraphAdapter is publicly exported, then remove the long resolve alias in
vitest.config.js and import the adapter via the package's public export (e.g.,
import from '@git-stunts/git-warp/InMemoryGraphAdapter' or the documented
symbol) or, if it isn't exported, revert the alias and instead create an
upstream issue/PR requesting a public export for InMemoryGraphAdapter and update
test/git.test.js to rely only on public exports.
  resolve: {
    alias: {
      '@git-stunts/git-warp/InMemoryGraphAdapter':
        resolve(__dirname, 'node_modules/@git-stunts/git-warp/src/infrastructure/adapters/InMemoryGraphAdapter.js'),
    },
  },
🧩 Analysis chain
🏁 Script executed:
# First, check if the `@git-stunts/git-warp` package exists and examine its package.json
if [ -d "node_modules/@git-stunts/git-warp" ]; then
echo "=== Package found, checking exports field ==="
cat node_modules/@git-stunts/git-warp/package.json | jq '.exports // "no exports field defined"'
echo ""
echo "=== Full package.json (main, types, exports) ==="
cat node_modules/@git-stunts/git-warp/package.json | jq '{main, types, exports, version}'
else
echo "Package `@git-stunts/git-warp` NOT found in node_modules"
fi

Repository: flyingrobots/git-cms
Length of output: 118
🏁 Script executed:
# Check the actual structure of the package to see if the path exists
if [ -d "node_modules/@git-stunts/git-warp" ]; then
echo "=== Directory structure of `@git-stunts/git-warp` ==="
find node_modules/@git-stunts/git-warp -type f -name "*.js" | grep -i "memorygraph\|InMemory" | head -20
echo ""
echo "=== Check if src/infrastructure/adapters/InMemoryGraphAdapter.js exists ==="
ls -la node_modules/@git-stunts/git-warp/src/infrastructure/adapters/InMemoryGraphAdapter.js 2>&1 || echo "File not found"
fi

Repository: flyingrobots/git-cms
Length of output: 46
🏁 Script executed:
# Check the main vitest.config.js and see how the alias is used
echo "=== vitest.config.js content ==="
cat vitest.config.js

Repository: flyingrobots/git-cms
Length of output: 628
🏁 Script executed:
# Search for usage of InMemoryGraphAdapter in the codebase
echo "=== Searching for InMemoryGraphAdapter usage ==="
rg "InMemoryGraphAdapter" --type js --type tsRepository: flyingrobots/git-cms
Length of output: 643
Hardcoded internal module path is unmaintainable — and the package isn't even installed.
The alias reaches into node_modules/@git-stunts/git-warp/src/infrastructure/adapters/InMemoryGraphAdapter.js, but the package itself is not installed in your current environment. This already causes the import in test/git.test.js to fail at runtime. Beyond that immediate breakage, hardcoding paths to package internals is a maintenance disaster:
- Any restructuring of `@git-stunts/git-warp` (moving files, renaming directories) breaks your tests silently — no compile error, just a runtime module-not-found that's hell to debug.
- You're depending on implementation details with zero guarantees of stability.
- This pattern does not scale.

Fix this by using public exports:

- Check if `@git-stunts/git-warp` already exports `InMemoryGraphAdapter` via its `exports` field in `package.json`.
- If yes, use that: `'@git-stunts/git-warp/InMemoryGraphAdapter'` resolves through the package's own export map (no alias needed, or a minimal one).
- If no, open an issue/PR upstream requesting a public export for this adapter instead of reaching into `src/`.
Do not ship code that depends on internal node_modules paths. Period.
🤖 Prompt for AI Agents
In `@vitest.config.js` around lines 8 - 13, The vitest alias hardcodes the
internal path to InMemoryGraphAdapter which breaks tests and is unmaintainable;
open package.json of `@git-stunts/git-warp` and check its "exports" or main entry
to see if InMemoryGraphAdapter is publicly exported, then remove the long
resolve alias in vitest.config.js and import the adapter via the package's
public export (e.g., import from '@git-stunts/git-warp/InMemoryGraphAdapter' or
the documented symbol) or, if it isn't exported, revert the alias and instead
create an upstream issue/PR requesting a public export for InMemoryGraphAdapter
and update test/git.test.js to rely only on public exports.
Rename alias to #test/InMemoryGraphAdapter to signal it's a test-only shim. Add TODO noting the upstream gap (not in git-warp's exports map as of v10.8.0). Simplify URL resolution.
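A sketch of what that renamed, test-only alias might look like in vitest.config.js; the exact layout of the commit is assumed, and the adapter path is the one quoted in the review above:

  // vitest.config.js
  import { fileURLToPath } from 'node:url';
  import { defineConfig } from 'vitest/config';

  export default defineConfig({
    resolve: {
      alias: {
        // TODO: drop once @git-stunts/git-warp publicly exports InMemoryGraphAdapter (gap as of v10.8.0)
        '#test/InMemoryGraphAdapter': fileURLToPath(
          new URL('./node_modules/@git-stunts/git-warp/src/infrastructure/adapters/InMemoryGraphAdapter.js', import.meta.url),
        ),
      },
    },
  });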
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@test/git.test.js`:
- Around line 92-99: Add explicit negative test cases in the "State Machine"
suite to cover the remaining invalid transitions: reverted → unpublish, reverted
→ revert, unpublished → unpublish, and unpublished → revert. For each case, use
the existing CmsService instance (cms) initialized in beforeEach, drive the
state into the starting state (e.g., call cms.revert() or cms.unpublish() as
needed) and then attempt the invalid transition by calling the appropriate
method (cms.unpublish() or cms.revert()), asserting that the call fails
(throw/rejects or returns an error/status) consistent with other
invalid-transition tests; name each spec clearly (e.g., "does not allow reverted
-> unpublish") so the behavior is locked down.
- Around line 10-13: The test suites duplicate the same beforeEach that
constructs new InMemoryGraphAdapter() and cms = new CmsService({ refPrefix,
graph });; refactor by moving that setup into a shared parent describe or a
small factory helper: create a top-level describe wrapping the "CmsService
(Integration)" and "State Machine" describes and place the beforeEach there
(using InMemoryGraphAdapter and CmsService symbols), or extract a helper
function (e.g., makeCms()) that returns { graph, cms } and call it from both
suites to remove the duplicated beforeEach code.
- Around line 193-199: The tests use expect.fail() which Vitest doesn't provide;
update the failing guards in the try/catch blocks that call cms.unpublishArticle
(and the other similar test cases in this file) to either throw a plain
Error('should have thrown') or import Vitest's assert and call
assert.fail('should have thrown'); if you choose assert, add "assert" to the
existing import from 'vitest'; replace every occurrence of expect.fail('should
have thrown') in these tests with the chosen alternative so the test fails
explicitly when no error is thrown.
  describe('State Machine', () => {
    let cms;
    const refPrefix = 'refs/cms';

    beforeEach(() => {
      const graph = new InMemoryGraphAdapter();
      cms = new CmsService({ refPrefix, graph });
    });
🧹 Nitpick | 🔵 Trivial
Consider adding coverage for remaining invalid transitions.
The suite covers the core happy paths and three invalid transitions well. A few edge cases remain untested:
- reverted → unpublish (presumably invalid)
- reverted → revert (double revert)
- unpublished → unpublish (double unpublish)
- unpublished → revert
These may be implicitly prevented by the state machine, but explicit negative tests would lock down the contract.
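A hedged sketch of one such negative test, in the style of this suite; the `saveSnapshot` payload shape and the double-save setup (so a parent commit exists to revert to) are assumptions:

  it('does not allow reverted -> unpublish', async () => {
    await cms.saveSnapshot({ slug: 'sm-neg', title: 'v1', body: 'first', trailers: {} });
    await cms.saveSnapshot({ slug: 'sm-neg', title: 'v2', body: 'second', trailers: {} });
    await cms.revertArticle({ slug: 'sm-neg' });          // drive the article into the reverted state
    await expect(cms.unpublishArticle({ slug: 'sm-neg' }))
      .rejects.toMatchObject({ code: 'invalid_state_transition' });
  });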
🤖 Prompt for AI Agents
In `@test/git.test.js` around lines 92 - 99, Add explicit negative test cases in
the "State Machine" suite to cover the remaining invalid transitions: reverted →
unpublish, reverted → revert, unpublished → unpublish, and unpublished → revert.
For each case, use the existing CmsService instance (cms) initialized in
beforeEach, drive the state into the starting state (e.g., call cms.revert() or
cms.unpublish() as needed) and then attempt the invalid transition by calling
the appropriate method (cms.unpublish() or cms.revert()), asserting that the
call fails (throw/rejects or returns an error/status) consistent with other
invalid-transition tests; name each spec clearly (e.g., "does not allow reverted
-> unpublish") so the behavior is locked down.
…nup) Fixes 20 actionable items from CodeRabbit/Codex review: DOMPurify for stored XSS, unpublish atomicity reorder, null guards, SRI hashes, unknown-status throw, Promise.all for N+1, and test improvements.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@CHANGELOG.md`:
- Line 34: Update the changelog entry so the user-facing phrase uses the proper
noun capitalization: change "markdown preview" to "Markdown preview" in the line
that reads "- **(P1) Stored XSS via markdown preview:** Sanitize
`marked.parse()` output with DOMPurify" (referencing the string containing
`marked.parse()` and `DOMPurify` to locate the exact sentence).
- Around line 5-9: Add a blank line before each second-level heading in
CHANGELOG.md that currently lacks surrounding whitespace: specifically insert an
empty line immediately above the "### Added", "### Changed", and "### Fixed"
headings so they comply with markdownlint MD022; update the three occurrences of
the "### Added"/"### Changed"/"### Fixed" headings in the Unreleased section to
be preceded by a blank line.
In `@public/index.html`:
- Around line 13-16: The stylesheet imports in the <style> block use `@import` of
Open Props from unpkg (the three `@import` lines) which cannot use SRI and poses a
supply-chain risk; replace these remote `@import` calls by vendoring the Open
Props files into the repo (download the referenced files: open-props,
normalize.min.css, buttons.min.css), serve them from your static assets, and
update the <style> block to reference the local CSS files (or link them via
local <link rel="stylesheet">) so the imports no longer rely on unpkg and SRI
isn’t required.
- Around line 906-908: The escAttr function currently only escapes &, ", and <;
update escAttr to also replace single quote characters by mapping ' to &#39; to
prevent injection into single-quoted attributes—modify the escAttr
implementation (the escAttr(s) function) to include .replace(/'/g, '&#39;') in
the replacement chain so all four characters (&, ", <, ') are escaped.
- Around line 7-12: The marked script src uses a floating major tag
(`marked@15`) which will break SRI on any v15 patch update; update the script
tag to pin the exact version that matches the provided integrity hash (e.g.,
replace `marked@15` with the exact release such as `marked@15.0.12`) in the src
URL and keep the existing integrity attribute (or regenerate the integrity hash
to match the chosen exact version), ensuring the `marked` import used by your
markdown preview (calls like marked.parse) continues to load correctly.
In `@src/lib/CmsService.js`:
- Around line 89-105: The method _resolveArticleState currently treats missing
refs as a draft because draftStatus defaults to STATES.DRAFT; add an existence
guard at the start of _resolveArticleState to check both draftSha and pubSha and
throw a CmsValidationError when both are falsy (include code 'article_not_found'
and field 'slug') so callers querying a nonexistent slug get a clear error
instead of state 'draft'; keep the rest of the function (reading message,
decoding, and calling resolveEffectiveState) unchanged and only proceed when at
least one of draftSha or pubSha exists.
- Around line 237-241: The trailers object in unpublishArticle and revertArticle
uses the wrong key casing 'updatedat' causing UI badges to miss it; update both
occurrences in the codec.encode calls inside unpublishArticle and revertArticle
to use 'updatedAt' (matching saveSnapshot and renderBadges expectations), and
scan for any other places writing trailers to ensure they use camelCase
'updatedAt' consistently.
  ## [Unreleased] — git-stunts branch

  ### Added
  - **Content Identity Policy (M1.1):** Canonical slug validation with NFKC normalization, reserved word rejection, and `CmsValidationError` contract (`ContentIdentityPolicy.js`)
  - **State Machine (M1.2):** Explicit draft/published/unpublished/reverted states with enforced transition rules (`ContentStatePolicy.js`)
Markdownlint MD022: headings need surrounding blank lines.
Lines 7, 22, and 29 (### Added, ### Changed, ### Fixed) each lack a blank line before them. This will trigger MD022 in CI if markdownlint is enforced.
Proposed fix (showing line 5–9; repeat for 21–22 and 28–29)
  ## [Unreleased] — git-stunts branch
  ### Added
+
  - **Content Identity Policy (M1.1):** Canonical slug validation with NFKC normalization, reserved word rejection, and `CmsValidationError` contract (`ContentIdentityPolicy.js`)
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)
[warning] 7-7: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
🤖 Prompt for AI Agents
In `@CHANGELOG.md` around lines 5 - 9, Add a blank line before each second-level
heading in CHANGELOG.md that currently lacks surrounding whitespace:
specifically insert an empty line immediately above the "### Added", "###
Changed", and "### Fixed" headings so they comply with markdownlint MD022;
update the three occurrences of the "### Added"/"### Changed"/"### Fixed"
headings in the Unreleased section to be preceded by a blank line.
  - Slug canonicalization enforced at all API ingress points
  - Admin UI API calls aligned with server contract (query params, response shapes)
  - Server integration test environment stabilized for CI
  - **(P1) Stored XSS via markdown preview:** Sanitize `marked.parse()` output with DOMPurify
"markdown" is a proper noun — capitalize it.
markdown preview → Markdown preview. Flagged by LanguageTool; tiny but this is a user-facing changelog.
-- **(P1) Stored XSS via markdown preview:** Sanitize `marked.parse()` output with DOMPurify
+- **(P1) Stored XSS via Markdown preview:** Sanitize `marked.parse()` output with DOMPurify
🧰 Tools
🪛 LanguageTool
[uncategorized] ~34-~34: Did you mean the formatting language “Markdown” (= proper noun)?
Context: ...abilized for CI - (P1) Stored XSS via markdown preview: Sanitize marked.parse() ou...
(MARKDOWN_NNP)
🤖 Prompt for AI Agents
In `@CHANGELOG.md` at line 34, Update the changelog entry so the user-facing
phrase uses the proper noun capitalization: change "markdown preview" to
"Markdown preview" in the line that reads "- **(P1) Stored XSS via markdown
preview:** Sanitize `marked.parse()` output with DOMPurify" (referencing the
string containing `marked.parse()` and `DOMPurify` to locate the exact
sentence).
| <script src="https://cdn.jsdelivr.net/npm/marked@15/marked.min.js" | ||
| integrity="sha384-948ahk4ZmxYVYOc+rxN1H2gM1EJ2Duhp7uHtZ4WSLkV4Vtx5MUqnV+l7u9B+jFv+" | ||
| crossorigin="anonymous"></script> | ||
| <script src="https://cdn.jsdelivr.net/npm/dompurify@3.2.4/dist/purify.min.js" | ||
| integrity="sha384-eEu5CTj3qGvu9PdJuS+YlkNi7d2XxQROAFYOr59zgObtlcux1ae1Il3u7jvdCSWu" | ||
| crossorigin="anonymous"></script> |
🧩 Analysis chain
🌐 Web query:
latest version marked.js npm package 2025
💡 Result:
The latest marked (marked.js) npm package version released in 2025 was v17.0.1, published November 20, 2025. [1] (Also shown as the latest version on jsDelivr’s npm view.) [2]
🌐 Web query:
marked.js v15 latest version npm
💡 Result:
The latest marked.js v15 release on npm is 15.0.12. [1]
npm i marked@15.0.12

🌐 Web query:
marked 15.0.12 SRI hash sha384-948ahk4ZmxYVYOc+rxN1H2gM1EJ2Duhp7uHtZ4WSLkV4Vtx5MUqnV+l7u9B+jFv+
💡 Result:
That string is in Subresource Integrity (SRI) format: an integrity attribute value using SHA-384 (sha384-…) to pin a specific CDN file (e.g., marked.min.js) so the browser will block it if the contents don’t match. Marked 15.0.12 is a real published version on cdnjs. [1] cdnjs also documents that SRI hashes may be provided as sri values in API responses for assets. [2]
Example usage:
<script
src="https://cdnjs.cloudflare.com/ajax/libs/marked/15.0.12/marked.min.js"
integrity="sha384-948ahk4ZmxYVYOc+rxN1H2gM1EJ2Duhp7uHtZ4WSLkV4Vtx5MUqnV+l7u9B+jFv+"
crossorigin="anonymous"></script>To verify/generate the SHA-384 SRI for a downloaded file:
openssl dgst -sha384 -binary marked.min.js | openssl base64 -A
# then prefix with: sha384-

Sources: [1] [2]
🏁 Script executed:
cat public/index.html | head -20

Repository: flyingrobots/git-cms
Length of output: 899
marked@15 is a floating major-version tag—SRI hash will shatter on the next v15 patch release.
The URL https://cdn.jsdelivr.net/npm/marked@15/marked.min.js resolves to whatever the latest v15.x.y is on the CDN. The SRI hash you've provided (sha384-948ahk4...) matches 15.0.12 exactly. The moment 15.0.13 lands, the CDN serves new minified bytes, the integrity check fails, the browser rejects the script, and marked becomes undefined. Your markdown preview feature dies silently—no error, no fallback, just a null reference crash when the code calls marked.parse().
DOMPurify is correctly pinned to @3.2.4—an exact version. Do the same for marked.
Fix
- <script src="https://cdn.jsdelivr.net/npm/marked@15/marked.min.js"
- integrity="sha384-948ahk4ZmxYVYOc+rxN1H2gM1EJ2Duhp7uHtZ4WSLkV4Vtx5MUqnV+l7u9B+jFv+"
+ <script src="https://cdn.jsdelivr.net/npm/marked@15.0.12/marked.min.js"
+ integrity="sha384-948ahk4ZmxYVYOc+rxN1H2gM1EJ2Duhp7uHtZ4WSLkV4Vtx5MUqnV+l7u9B+jFv+"
crossorigin="anonymous"></script>📝 Committable suggestion
🤖 Prompt for AI Agents
In `@public/index.html` around lines 7 - 12, The marked script src uses a floating
major tag (`marked@15`) which will break SRI on any v15 patch update; update the
script tag to pin the exact version that matches the provided integrity hash
(e.g., replace `marked@15` with the exact release such as `marked@15.0.12`) in
the src URL and keep the existing integrity attribute (or regenerate the
integrity hash to match the chosen exact version), ensuring the `marked` import
used by your markdown preview (calls like marked.parse) continues to load
correctly.
Freeze TRANSITIONS Sets to prevent mutation, add manual CAS check in DI-mode _updateRef, widen vitest exclude glob, add negative transition tests (reverted→unpublish/revert, unpublished→revert), assert server test setup calls, fix CHANGELOG markdown lint and add [Unreleased] link.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test/server.test.js (1)
27-32: 🧹 Nitpick | 🔵 Trivial

The env var save/restore boilerplate is screaming for extraction.
Six env vars × 2 operations (save + restore with undefined check) = 36 lines of pure ceremony. This is a maintenance hazard — add a seventh env var and you'll inevitably forget one branch.
♻️ Proposed helper
+ const ENV_KEYS = [
+   'GIT_CMS_REPO', 'PORT',
+   'GIT_AUTHOR_NAME', 'GIT_AUTHOR_EMAIL',
+   'GIT_COMMITTER_NAME', 'GIT_COMMITTER_EMAIL',
+ ];
+ let savedEnv;

  beforeAll(async () => {
    cwd = mkdtempSync(path.join(os.tmpdir(), 'git-cms-server-api-test-'));
    execFileSync('git', ['init'], { cwd });
    execFileSync('git', ['config', 'user.name', 'Test'], { cwd });
    execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd });
-   previousRepoEnv = process.env.GIT_CMS_REPO;
-   previousPortEnv = process.env.PORT;
-   previousAuthorNameEnv = process.env.GIT_AUTHOR_NAME;
-   previousAuthorEmailEnv = process.env.GIT_AUTHOR_EMAIL;
-   previousCommitterNameEnv = process.env.GIT_COMMITTER_NAME;
-   previousCommitterEmailEnv = process.env.GIT_COMMITTER_EMAIL;
+   savedEnv = Object.fromEntries(ENV_KEYS.map(k => [k, process.env[k]]));
    process.env.GIT_CMS_REPO = cwd;
    // ... remaining assignments unchanged
  });

  afterAll(async () => {
    // ... server/symlink cleanup unchanged ...
-   if (previousRepoEnv === undefined) { delete process.env.GIT_CMS_REPO; } else { ... }
-   // ... repeat ×6 ...
+   for (const [k, v] of Object.entries(savedEnv)) {
+     if (v === undefined) delete process.env[k];
+     else process.env[k] = v;
+   }
    rmSync(cwd, { recursive: true, force: true });
  });

Also applies to: 72-101
…r XSS, vendor CSS

- Use camelCase `updatedAt` in unpublishArticle/revertArticle to match saveSnapshot and renderBadges; destructure out decoded lowercase key to avoid duplicate trailer errors
- Add article_not_found guard in _resolveArticleState when both refs are missing
- Escape single quotes in escAttr to prevent attribute injection
- Vendor Open Props CSS locally instead of @import from unpkg
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/lib/CmsService.js (2)
178-206: 🧹 Nitpick | 🔵 Trivial

`saveSnapshot` transition guard is clean but allows first-save without validation.

Line 185 only validates transitions when `parentSha` is truthy (i.e., the article already exists). This correctly permits creating a brand-new draft. The `status: 'draft'` literal on line 193 is consistent with `STATES.DRAFT` — consider using the constant for consistency, though this is minor.

📝 Suggested

  const finalTrailers = {
    ...safeTrailers,
    contentid: identity.contentId,
-   status: 'draft',
+   status: STATES.DRAFT,
    updatedAt: new Date().toISOString(),
  };
348-353: ⚠️ Potential issue | 🟡 Minor

`uploadAsset` ref update has no CAS — concurrent uploads silently overwrite.

Line 353 calls `_updateRef({ ref, newSha: commitSha })` without `oldSha`. In `_updateRef`, this skips the CAS check entirely, meaning two concurrent `uploadAsset` calls for the same slug can race and the last write wins silently. If this is acceptable for the `chunks/` namespace, a brief comment would prevent future confusion.
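A hedged sketch of adding the guard, assuming `readRef` and `commitNode` behave as referenced elsewhere in this review (the `commitNode` signature is illustrative):

  // Read the current tip first so the ref update is compare-and-swap guarded.
  const oldSha = await this.graph.readRef(ref);
  const commitSha = await this.graph.commitNode({ message, parents: oldSha ? [oldSha] : [] }); // illustrative signature
  // Passing oldSha makes a concurrent upload fail loudly instead of being silently overwritten.
  await this._updateRef({ ref, newSha: commitSha, oldSha });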
🤖 Fix all issues with AI agents
In `@public/css/normalize.min.css`:
- Line 1: The vendored minified Open Props CSS (public/css/normalize.min.css)
should be excluded from linting rather than modified; update your linters to
ignore public/css/*.min.css by adding that glob to Stylelint's ignoreFiles (or
equivalent .stylelintrc / stylelint.config) and to Biome's exclusion settings
(add the glob to biome.json/biome.jsonc lint/file exclude list), then re-run the
lint job to verify the file is no longer reported.
In `@public/index.html`:
- Around line 875-884: markDirty() clears and sets the autosave timer only when
state.saving is false, so if a save was in-flight edits become orphaned; update
the finally block of save() to re-arm autosave when appropriate: after setting
state.saving = false (in save's finally) check if state.dirty is true and
state.currentSlug is set and there is no existing autosave timer, then call the
same logic used in markDirty() (clearAutosave() then set state.autosaveTimer =
setTimeout(() => this.save(), 3000)) so in-flight saves that completed while the
user typed will schedule a follow-up autosave; reference save(), markDirty(),
state.dirty, state.saving, state.autosaveTimer, state.currentSlug, and
clearAutosave().
- Around line 778-826: The duplicated systemKeys array in renderTrailers and
collectTrailers should be extracted into a single shared constant to avoid
divergence: add a top-level constant (e.g. SYSTEM_TRAILER_KEYS =
['status','Status','updated-at','Updated-at','updatedAt']) in the same script
scope and replace the local systemKeys references inside renderTrailers and
collectTrailers with SYSTEM_TRAILER_KEYS so both functions read from the single
source of truth.
- Line 73: The CSS rule for "aside header h1" uses the deprecated clip:
rect(0,0,0,0); replace it with the modern clip-path method and vendor-prefixed
fallback: remove the clip declaration and add clip-path: inset(50%); (and
-webkit-clip-path: inset(50%) for broader support) while keeping the existing
position, width, height, overflow and white-space properties; ensure any
visually-hidden pattern also includes border: 0 and padding: 0 if not already
present to match the accessible hiding pattern.
- Around line 586-611: fetchList() currently leaves state.articles populated
when api.list() fails, causing stale data to be referenced; in the catch block
clear state.articles (e.g., set state.articles = []) before clearing
ul.innerHTML and updating any footer/toast so the UI and any consumers of
state.articles reflect the failed load; locate the fetchList method and add the
state.articles reset in its catch branch (referencing fetchList, state.articles,
and api.list).
- Around line 553-559: updatePreview currently calls marked.parse and
DOMPurify.sanitize unguarded, causing ReferenceError if those CDN scripts fail;
wrap the parsing/sanitizing call in a try/catch and guard for the existence of
marked and DOMPurify (check typeof marked?.parse and typeof DOMPurify?.sanitize)
inside updatePreview (still using previewTimer debounce), and on failure fall
back to a safe no-op rendering like setting previewPane.textContent = body (or a
safely escaped string) and log the error to console; update references in the
function to use document.getElementById('bodyInput') and 'previewPane' as before
but branch to the fallback when marked.parse or DOMPurify.sanitize are
unavailable or throw.
- Around line 499-503: The api.list method does not URL-encode the kind
parameter before building the request URL, so update the async list(kind)
function to encode kind (e.g., using encodeURIComponent) when constructing
fetch(`${API_BASE}/list?kind=${...}`) to match how slug is handled elsewhere;
ensure you only change the URL interpolation for kind in api.list and keep error
handling and return res.json() intact.
- Around line 498-539: The API methods (api.list, api.show, api.snapshot,
api.publish, api.upload) currently throw generic errors using only res.status
and discard the server's structured error payload; update each method to, on
!res.ok, await res.json() (or safe-parse the body) and include the returned
structured error fields (e.g., code, field, message) in the thrown error (or
throw a new Error with a combined message/object) so callers receive actionable
feedback from the server instead of just the HTTP status.
- Around line 928-930: The Escape key handler currently calls UI.deselect() when
document.getElementById('editorPanel').contains(document.activeElement), which
still triggers while typing in the editor textarea; change the condition so
Escape is ignored when focus is inside form controls or editable elements.
Update the key handler (the block referencing
document.getElementById('editorPanel') and UI.deselect()) to first test
document.activeElement and skip deselect if activeElement.matches('input,
textarea, select, [contenteditable="true"]') or if activeElement.tagName is one
of those, otherwise call UI.deselect(); keep the existing contains(...) check so
non-input focus inside editor still allows the shortcut. Ensure the new check
covers textareas, inputs, selects and contentEditable elements.
- Around line 674-709: The save() logic currently clears state.dirty only after
awaiting api.snapshot, which loses any edits made while the save was in-flight;
fix by setting state.dirty = false before the await (so subsequent markDirty()
calls can re-set it) and in the finally block, if state.dirty is true, re-arm
the autosave timer by invoking the existing autosave starter (e.g., call
markDirty() or the function that schedules autosave) instead of leaving autosave
disabled; reference save(), markDirty(), state.dirty and clearAutosave() when
making this change.
In `@src/lib/CmsService.js`:
- Around line 250-258: The destructuring removes the decoded.trailers key named
"updatedat" (lowercase) before re-adding a canonical "updatedAt" (camelCase);
add a concise inline comment above the destructuring in the block that creates
newMessage (the decoded.trailers destructuring that extracts updatedat)
explaining that the trailer codec decodes keys to lowercase, so we intentionally
strip "updatedat" and reintroduce "updatedAt" when encoding the new message, and
mirror the same explanatory comment in the corresponding destructuring in
revertArticle to avoid confusion for future readers.
- Around line 299-302: The destructuring uses the lowercase alias pattern
`updatedat: _u` which reduces readability; in the block handling
parentDecoded.trailers (around parentCommitSha, parentMessage, parentDecoded)
change the alias to a clearer name (e.g. `updatedat: _updatedAt` or `updatedat:
updatedAt`) instead of `_u`, keeping the original property key (`updatedat`) but
mapping it to a readable local identifier, and leave the rest spread as
`restParentTrailers`.
In `@test/git.test.js`:
- Around line 229-260: Add a new negative test verifying double-unpublish is
rejected: create an article via cms.saveSnapshot, publish it with
cms.publishArticle, unpublish it with cms.unpublishArticle, then assert that a
second call to cms.unpublishArticle rejects with CmsValidationError / code
'invalid_state_transition' (mirror the style of the surrounding tests like the
existing reverted/unpublish and reverted/revert tests to keep consistency).
@@ -0,0 +1 @@
| :where(html){--csstools-color-scheme--light:initial;--link:var(--indigo-7);--link-visited:var(--purple-7);--text-1:var(--gray-12);--text-2:var(--gray-7);--surface-1:var(--gray-0);--surface-2:var(--gray-2);--surface-3:var(--gray-3);--surface-4:var(--gray-4);--scrollthumb-color:var(--gray-7);-webkit-text-size-adjust:none;accent-color:var(--brand,var(--link));background-color:var(--surface-1);block-size:100%;caret-color:var(--brand,var(--link));color:var(--text-1);color-scheme:light;font-family:var(--font-system-ui);line-height:var(--font-lineheight-3);scrollbar-color:var(--scrollthumb-color) transparent}@media (dynamic-range:high) or (color-gamut:p3){@supports (color:color(display-p3 0 0 0)){:where(html){--link:color(display-p3 .1 .39 1);--link-visited:color(display-p3 .6 .2 1)}}}:where(html) :where(dialog){background-color:var(--surface-1)}:where(html) :where(button,.btn){--_highlight:var(--_highlight-light);--_bg:var(--_bg-light);--_ink-shadow:var(--_ink-shadow-light)}:where(html) :where(button,.btn) :where([type=reset]){--_text:var(--red-6);--_border:var(--red-3)}:where(html) [disabled]:where(button,input[type=button],.btn){--_text:var(--gray-6)}:where(html) [disabled]:where(button,input[type=submit],.btn){--_text:var(--gray-6)}:where(html) [disabled]:where(button,input[type=reset],.btn){--_text:var(--gray-6)}:where(html) :where(textarea,select,input:not([type=button],[type=submit],[type=reset])){background-color:var(--surface-2)}@media (prefers-color-scheme:dark){:where(html){--csstools-color-scheme--light: ;--link:var(--indigo-3);--link-visited:var(--purple-3);--text-1:var(--gray-1);--text-2:var(--gray-4);--surface-1:var(--gray-9);--surface-2:var(--gray-8);--surface-3:var(--gray-7);--surface-4:var(--gray-6);--scrollthumb-color:var(--gray-6);--shadow-strength:10%;--shadow-color:220 40% 2%;color-scheme:dark}}:where(a[href]){color:var(--brand,var(--link))}:where(a[href]):where(:visited){color:var(--link-visited)}:focus-visible{outline-color:var(--brand,var(--link))}*,:after,:before{box-sizing:border-box}:where(:not(dialog)){margin:0}:where(:not(fieldset,progress,meter)){background-origin:border-box;background-repeat:no-repeat;border-style:solid;border-width:0}@media (prefers-reduced-motion:no-preference){:where(html){scroll-behavior:smooth}}@media (prefers-reduced-motion:no-preference){:where(:focus-visible){transition:outline-offset 145ms var(--ease-2)}:where(:not(:active):focus-visible){transition-duration:.25s}}:where(:not(:active):focus-visible){outline-offset:5px}:where(body){min-block-size:100%}:where(h1,h2,h3,h4,h5,h6){text-wrap:balance;font-weight:var(--font-weight-9);line-height:var(--font-lineheight-1)}:where(h1){font-size:var(--font-size-8);max-inline-size:var(--size-header-1)}:where(h2){font-size:var(--font-size-6);max-inline-size:var(--size-header-2)}:where(h3){font-size:var(--font-size-5)}:where(h4){font-size:var(--font-size-4)}:where(h5){font-size:var(--font-size-3)}:where(h3,h4,h5,h6,dt){max-inline-size:var(--size-header-3)}:where(p,ul,ol,dl,h6){font-size:var(--font-size-2)}:where(a,u,ins,abbr){text-underline-offset:1px}@supports 
(-moz-appearance:none){:where(a,u,ins,abbr){text-underline-offset:2px}}:where(a[href],area,button,input:not([type=text],[type=email],[type=number],[type=password],[type=""],[type=tel],[type=url]),label[for],select,summary,[tabindex]:not([tabindex*="-"],pre)){cursor:pointer}:where(a[href],area,button,input,label[for],select,summary,textarea,[tabindex]:not([tabindex*="-"])){-webkit-tap-highlight-color:transparent;touch-action:manipulation}:where(a):where([href]){text-decoration-color:var(--indigo-2)}:where(a):where([href]):where(:visited){text-decoration-color:var(--purple-2)}:where(a):where(:not(:hover)){text-decoration:inherit}:where(img,svg,video,canvas,audio,iframe,embed,object){display:block}:where(img,svg,video){block-size:auto;max-inline-size:100%}:where(input,button,textarea,select),:where(input[type=file])::-webkit-file-upload-button{color:inherit;font:inherit;font-size:inherit;letter-spacing:inherit}::placeholder{color:var(--gray-7);opacity:.75}:where(input:not([type=range]),textarea){padding-block:var(--size-1);padding-inline:var(--size-2)}:where(select){field-sizing:content;padding-block:.75ch;padding-inline:var(--size-relative-4) 0}:where(textarea,select,input:not([type=button],[type=submit],[type=reset])){background-color:var(--surface-2);border-radius:var(--radius-2)}:where(textarea){field-sizing:content;min-block-size:2lh;min-inline-size:var(--size-content-1);resize:vertical}:where(input[type=checkbox],input[type=radio]){block-size:var(--size-3);inline-size:var(--size-3)}:where(svg:not([width])){inline-size:var(--size-10)}:where(code,kbd,samp,pre){font-family:var(--font-monospace-code),monospace}:where(:not(pre)>code,kbd){white-space:nowrap}:where(pre){direction:ltr;max-inline-size:max-content;min-inline-size:0;white-space:pre;writing-mode:lr}:where(:not(pre)>code){background:var(--surface-2);border-radius:var(--radius-2);padding:var(--size-1) var(--size-2);writing-mode:lr}:where(kbd,var){border-color:var(--surface-4);border-radius:var(--radius-2);border-width:var(--border-size-1);padding:var(--size-1) var(--size-2)}:where(mark){border-radius:var(--radius-2);padding-inline:var(--size-1)}:where(ol,ul){padding-inline-start:var(--size-8)}:where(li){padding-inline-start:var(--size-2)}:where(li,dd,figcaption){max-inline-size:var(--size-content-2)}:where(p){text-wrap:pretty;max-inline-size:var(--size-content-3)}:where(dt,summary){font-weight:var(--font-weight-7)}:where(dt:not(:first-of-type)){margin-block-start:var(--size-5)}:where(small){font-size:max(.5em,var(--font-size-0));max-inline-size:var(--size-content-1)}:where(hr){background-color:var(--surface-3);height:var(--border-size-2);margin-block:var(--size-fluid-5)}:where(figure){display:grid;gap:var(--size-2);place-items:center}:where(figure)>:where(figcaption){text-wrap:balance;font-size:var(--font-size-1)}:where(blockquote,:not(blockquote)>cite){border-inline-start-width:var(--border-size-3)}:where(blockquote){display:grid;gap:var(--size-3);max-inline-size:var(--size-content-2);padding-block:var(--size-3);padding-inline:var(--size-4)}:where(:not(blockquote)>cite){padding-inline-start:var(--size-2)}:where(summary){background:var(--surface-3);border-radius:var(--radius-2);margin:calc(var(--size-2)*-1) calc(var(--size-3)*-1);padding:var(--size-2) 
var(--size-3)}:where(details){background:var(--surface-2);border-radius:var(--radius-2);padding-block:var(--size-2);padding-inline:var(--size-3)}:where(details[open]>summary){border-end-end-radius:0;border-end-start-radius:0;margin-bottom:var(--size-2)}:where(fieldset){border:var(--border-size-1) solid var(--surface-4);border-radius:var(--radius-2)}:where(del){background:var(--red-9);color:var(--red-2)}:where(ins){background:var(--green-9);color:var(--green-1)}:where(abbr){text-decoration-color:var(--blue-5)}:where(dialog){background-color:var(--surface-1);border-radius:var(--radius-3);box-shadow:var(--shadow-6);color:inherit}:where(menu){display:flex;gap:var(--size-3);padding-inline-start:0}:where(sup){font-size:.5em}:where(table){--nice-inner-radius:calc(var(--radius-3) - 2px);background:var(--surface-2);border:1px solid var(--surface-2);border-radius:var(--radius-3);width:fit-content}:where(table:not(:has(tfoot)) tr:last-child td:first-child){border-end-start-radius:var(--nice-inner-radius)}:where(table:not(:has(tfoot)) tr:last-child td:last-child){border-end-end-radius:var(--nice-inner-radius)}:where(table thead tr:first-child th:first-child){border-start-start-radius:var(--nice-inner-radius)}:where(table thead tr:first-child th:last-child){border-start-end-radius:var(--nice-inner-radius)}:where(tfoot tr:last-child th:first-of-type){border-end-start-radius:var(--nice-inner-radius)}:where(tfoot tr:last-child td:first-of-type){border-end-start-radius:var(--nice-inner-radius)}:where(tfoot tr:last-child th:last-of-type){border-end-end-radius:var(--nice-inner-radius)}:where(tfoot tr:last-child td:last-of-type){border-end-end-radius:var(--nice-inner-radius)}:where(th){background-color:var(--surface-2);color:var(--text-1)}:where(table a:not(.does-not-exist):focus-visible){outline-offset:-2px}:where(table button:not(.does-not-exist):focus-visible){outline-offset:-2px}:where(table [contenteditable]:focus-visible){outline-offset:-2px}:where(td){text-wrap:pretty;background:var(--surface-1);max-inline-size:var(--size-content-2)}:where(td,th){padding:var(--size-2);text-align:left}:where(td:not([align])){text-align:center}:where(th:not([align])){text-align:center}:where(thead){border-collapse:collapse}:where(table tr:hover td),:where(tbody tr:nth-child(2n):hover td){background-color:var(--surface-3)}:where(table>caption){margin:var(--size-3)}:where(tfoot button){padding-block:var(--size-1);padding-inline:var(--size-3)}@media (prefers-color-scheme:dark){:where(textarea,select,input:not([type=button],[type=submit],[type=reset])){background-color:#171a1c}:where(dialog){background-color:var(--surface-2)}::placeholder{color:var(--gray-6)}} No newline at end of file | |||
🧹 Nitpick | 🔵 Trivial
Vendored third-party CSS — exclude from linting.
This is a minified Open Props normalize asset. All static analysis errors (nonstandard gradient directions, value-keyword-case, keyframe-selector-notation) are upstream artifacts. Modifying vendored output is a maintenance trap. Configure Biome/Stylelint to ignore public/css/*.min.css instead.
#!/bin/bash
# Check if there's a stylelint config that could exclude vendored CSS
fd -t f '.stylelintrc' --exec cat {}
fd -t f 'stylelint.config' --exec cat {}
# Check biome config
fd -t f 'biome.json' --exec cat {}
fd -t f 'biome.jsonc' --exec cat {}
🤖 Prompt for AI Agents
In `@public/css/normalize.min.css` at line 1, The vendored minified Open Props CSS
(public/css/normalize.min.css) should be excluded from linting rather than
modified; update your linters to ignore public/css/*.min.css by adding that glob
to Stylelint's ignoreFiles (or equivalent .stylelintrc / stylelint.config) and
to Biome's exclusion settings (add the glob to biome.json/biome.jsonc lint/file
exclude list), then re-run the lint job to verify the file is no longer
reported.
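If the repo opts for a JS-based Stylelint config, a minimal sketch could look like the following (the `stylelint.config.js` filename is an assumption; the matching exclusion for Biome would go in its own `biome.json`, whose exact key depends on the Biome version):

```js
// stylelint.config.js (hypothetical): skip vendored, minified assets so upstream
// artifacts in the Open Props normalize build don't show up as lint errors.
module.exports = {
  ignoreFiles: ['public/css/*.min.css'],
};
```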
| padding: var(--size-5); | ||
| overflow-y: auto; | ||
| aside header .logo { display: block; height: 28px; width: auto; } | ||
| aside header h1 { position: absolute; width: 1px; height: 1px; overflow: hidden; clip: rect(0,0,0,0); white-space: nowrap; } |
Deprecated clip property in visually-hidden pattern.
clip: rect(0,0,0,0) is deprecated. Replace with the modern equivalent:
- aside header h1 { position: absolute; width: 1px; height: 1px; overflow: hidden; clip: rect(0,0,0,0); white-space: nowrap; }
+ aside header h1 { position: absolute; width: 1px; height: 1px; overflow: hidden; clip-path: inset(50%); white-space: nowrap; }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| aside header h1 { position: absolute; width: 1px; height: 1px; overflow: hidden; clip: rect(0,0,0,0); white-space: nowrap; } | |
| aside header h1 { position: absolute; width: 1px; height: 1px; overflow: hidden; clip-path: inset(50%); white-space: nowrap; } |
🤖 Prompt for AI Agents
In `@public/index.html` at line 73, The CSS rule for "aside header h1" uses the
deprecated clip: rect(0,0,0,0); replace it with the modern clip-path method and
vendor-prefixed fallback: remove the clip declaration and add clip-path:
inset(50%); (and -webkit-clip-path: inset(50%) for broader support) while
keeping the existing position, width, height, overflow and white-space
properties; ensure any visually-hidden pattern also includes border: 0 and
padding: 0 if not already present to match the accessible hiding pattern.
| const api = { | ||
| async list(kind) { | ||
| const res = await fetch(`${API_BASE}/list?kind=${kind}`); | ||
| if (!res.ok) throw new Error(`List failed: ${res.status}`); | ||
| return res.json(); | ||
| }, | ||
|
|
||
| async show(slug, kind = 'articles') { | ||
| const res = await fetch(`${API_BASE}/show?slug=${encodeURIComponent(slug)}&kind=${kind}`); | ||
| if (!res.ok) throw new Error(`Show failed: ${res.status}`); | ||
| return res.json(); | ||
| }, | ||
|
|
||
| async snapshot({ slug, title, body, trailers }) { | ||
| const res = await fetch(`${API_BASE}/snapshot`, { | ||
| method: 'POST', | ||
| headers: { 'Content-Type': 'application/json' }, | ||
| body: JSON.stringify({ slug, title, body }) | ||
| body: JSON.stringify({ slug, title, body, trailers }), | ||
| }); | ||
| const data = await res.json(); | ||
| if (!res.ok) { | ||
| throw new Error(data.error || `save failed (${res.status})`); | ||
| } | ||
| if (!data.sha) { | ||
| throw new Error('save failed (missing commit sha)'); | ||
| } | ||
| currentSha = data.sha; | ||
| document.getElementById('status').textContent = `Saved ${data.sha.slice(0,7)}`; | ||
| await fetchList(); | ||
| } catch (err) { | ||
| document.getElementById('status').textContent = `Save failed: ${err.message}`; | ||
| } | ||
| } | ||
| if (!res.ok) throw new Error(`Snapshot failed: ${res.status}`); | ||
| return res.json(); | ||
| }, | ||
|
|
||
| async function publish() { | ||
| const slug = document.getElementById('slugInput').value; | ||
| if (!slug) return alert('Slug required'); | ||
| if (!confirm(`Publish ${slug}? This will update the public ref.`)) return; | ||
| async publish({ slug, sha }) { | ||
| const res = await fetch(`${API_BASE}/publish`, { | ||
| method: 'POST', | ||
| headers: { 'Content-Type': 'application/json' }, | ||
| body: JSON.stringify({ slug, sha }), | ||
| }); | ||
| if (!res.ok) throw new Error(`Publish failed: ${res.status}`); | ||
| return res.json(); | ||
| }, | ||
|
|
||
| document.getElementById('status').textContent = 'Publishing...'; | ||
| try { | ||
| const res = await fetch(`${API}/publish`, { | ||
| async upload({ slug, filename, data }) { | ||
| const res = await fetch(`${API_BASE}/upload`, { | ||
| method: 'POST', | ||
| headers: { 'Content-Type': 'application/json' }, | ||
| body: JSON.stringify({ slug, sha: currentSha }) // publish current loaded sha | ||
| body: JSON.stringify({ slug, filename, data }), | ||
| }); | ||
| if (!res.ok) throw new Error(`Upload failed: ${res.status}`); | ||
| return res.json(); | ||
| }, |
API layer discards structured server error responses.
The CHANGELOG (line 28) states the server now returns structured { code, field } errors, but every API method throws new Error(...failed: ${res.status}) without reading the response body. The user sees "Save failed: Snapshot failed: 422" instead of actionable feedback like "invalid slug" or "title required."
Proposed fix — parse error body before throwing
async snapshot({ slug, title, body, trailers }) {
const res = await fetch(`${API_BASE}/snapshot`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ slug, title, body, trailers }),
});
- if (!res.ok) throw new Error(`Snapshot failed: ${res.status}`);
+ if (!res.ok) {
+ const err = await res.json().catch(() => ({}));
+ throw new Error(err.code || `Snapshot failed: ${res.status}`);
+ }
return res.json();
},
Apply the same pattern to publish, upload, etc.
🤖 Prompt for AI Agents
In `@public/index.html` around lines 498 - 539, The API methods (api.list,
api.show, api.snapshot, api.publish, api.upload) currently throw generic errors
using only res.status and discard the server's structured error payload; update
each method to, on !res.ok, await res.json() (or safe-parse the body) and
include the returned structured error fields (e.g., code, field, message) in the
thrown error (or throw a new Error with a combined message/object) so callers
receive actionable feedback from the server instead of just the HTTP status.
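One way to avoid repeating that parsing in every method is a small shared helper. This is a sketch only: the `apiError` name is hypothetical, and the `{ code, field, error }` response shape is assumed from the CHANGELOG note referenced above.

```js
// Hypothetical helper: convert a non-OK fetch Response into an Error that carries
// the server's structured payload ({ code, field, error }) when one is present.
async function apiError(res, fallback) {
  const body = await res.json().catch(() => ({})); // body may not be JSON at all
  const err = new Error(body.error || body.code || `${fallback}: ${res.status}`);
  err.code = body.code;
  err.field = body.field;
  return err;
}

// Example use inside api.snapshot (and likewise in publish, upload, etc.):
//   if (!res.ok) throw await apiError(res, 'Snapshot failed');
```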
| async list(kind) { | ||
| const res = await fetch(`${API_BASE}/list?kind=${kind}`); | ||
| if (!res.ok) throw new Error(`List failed: ${res.status}`); | ||
| return res.json(); | ||
| }, |
kind parameter not URL-encoded in api.list().
While kind is currently always 'articles' or 'published' from hardcoded data-kind attributes, it should still be encoded for defense-in-depth, consistent with how slug is handled on line 506.
async list(kind) {
- const res = await fetch(`${API_BASE}/list?kind=${kind}`);
+ const res = await fetch(`${API_BASE}/list?kind=${encodeURIComponent(kind)}`);
🤖 Prompt for AI Agents
In `@public/index.html` around lines 499 - 503, The api.list method does not
URL-encode the kind parameter before building the request URL, so update the
async list(kind) function to encode kind (e.g., using encodeURIComponent) when
constructing fetch(`${API_BASE}/list?kind=${...}`) to match how slug is handled
elsewhere; ensure you only change the URL interpolation for kind in api.list and
keep error handling and return res.json() intact.
| function updatePreview() { | ||
| clearTimeout(previewTimer); | ||
| previewTimer = setTimeout(() => { | ||
| const body = document.getElementById('bodyInput').value; | ||
| document.getElementById('previewPane').innerHTML = DOMPurify.sanitize(marked.parse(body || '')); | ||
| }, 150); | ||
| } |
No guard against CDN script load failure.
If marked or DOMPurify fail to load (SRI mismatch, network error, ad blocker), marked.parse() and DOMPurify.sanitize() throw ReferenceError on every keystroke. Given the floating marked@15 SRI concern, this is a realistic failure mode.
function updatePreview() {
clearTimeout(previewTimer);
previewTimer = setTimeout(() => {
+ if (typeof marked === 'undefined' || typeof DOMPurify === 'undefined') return;
const body = document.getElementById('bodyInput').value;
document.getElementById('previewPane').innerHTML = DOMPurify.sanitize(marked.parse(body || ''));
}, 150);
}
🤖 Prompt for AI Agents
In `@public/index.html` around lines 553 - 559, updatePreview currently calls
marked.parse and DOMPurify.sanitize unguarded, causing ReferenceError if those
CDN scripts fail; wrap the parsing/sanitizing call in a try/catch and guard for
the existence of marked and DOMPurify (check typeof marked?.parse and typeof
DOMPurify?.sanitize) inside updatePreview (still using previewTimer debounce),
and on failure fall back to a safe no-op rendering like setting
previewPane.textContent = body (or a safely escaped string) and log the error to
console; update references in the function to use
document.getElementById('bodyInput') and 'previewPane' as before but branch to
the fallback when marked.parse or DOMPurify.sanitize are unavailable or throw.
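A sketch of the guarded version, reusing the `previewTimer`, `bodyInput`, and `previewPane` names from the snippet above; rendering the raw markdown as plain text is one possible fallback, not the only option:

```js
function updatePreview() {
  clearTimeout(previewTimer);
  previewTimer = setTimeout(() => {
    const body = document.getElementById('bodyInput').value;
    const pane = document.getElementById('previewPane');
    // If the CDN scripts never loaded (SRI mismatch, ad blocker, offline), fall back
    // to plain text instead of throwing ReferenceError on every keystroke.
    if (typeof marked === 'undefined' || typeof DOMPurify === 'undefined') {
      pane.textContent = body;
      return;
    }
    try {
      pane.innerHTML = DOMPurify.sanitize(marked.parse(body || ''));
    } catch (err) {
      console.error('Markdown preview failed:', err);
      pane.textContent = body;
    }
  }, 150);
}
```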
| markDirty() { | ||
| if (!state.dirty) { | ||
| state.dirty = true; | ||
| } | ||
| // Autosave for existing articles (skip if already saving) | ||
| this.clearAutosave(); | ||
| if (state.currentSlug && !state.saving) { | ||
| state.autosaveTimer = setTimeout(() => this.save(), 3000); | ||
| } | ||
| }, |
Autosave never re-arms after a save completes while user was typing.
This is the other side of the data-loss bug I flagged on save(). If state.saving is true when markDirty() fires (line 881), the autosave timer is never set. When save() finishes, nothing re-checks whether the content is dirty and needs another save. Combined with the dirty-flag clearing issue, edits made during an in-flight save are silently orphaned.
The finally block in save() should re-arm autosave when state.dirty is still true.
🤖 Prompt for AI Agents
In `@public/index.html` around lines 875 - 884, markDirty() clears and sets the
autosave timer only when state.saving is false, so if a save was in-flight edits
become orphaned; update the finally block of save() to re-arm autosave when
appropriate: after setting state.saving = false (in save's finally) check if
state.dirty is true and state.currentSlug is set and there is no existing
autosave timer, then call the same logic used in markDirty() (clearAutosave()
then set state.autosaveTimer = setTimeout(() => this.save(), 3000)) so in-flight
saves that completed while the user typed will schedule a follow-up autosave;
reference save(), markDirty(), state.dirty, state.saving, state.autosaveTimer,
state.currentSlug, and clearAutosave().
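A sketch of the shape `save()` could take, written as an object method on the same controller; `collectEditorFields()` is a hypothetical stand-in for however the editor currently gathers slug/title/body/trailers:

```js
async save() {
  if (state.saving) return;      // markDirty() keeps state.dirty set for a later pass
  state.saving = true;
  state.dirty = false;           // edits made from here on flip it back to true
  try {
    await api.snapshot(collectEditorFields()); // hypothetical field-gathering helper
  } catch (err) {
    state.dirty = true;          // failed save: the content still needs persisting
    console.error('Autosave failed:', err);
  } finally {
    state.saving = false;
    // If the user kept typing while the request was in flight, schedule a follow-up save.
    if (state.dirty && state.currentSlug) {
      this.clearAutosave();
      state.autosaveTimer = setTimeout(() => this.save(), 3000);
    }
  }
},
```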
| if (e.key === 'Escape' && document.getElementById('editorPanel').contains(document.activeElement)) { | ||
| UI.deselect(); | ||
| } |
Escape while typing in the editor textarea triggers deselect() — discards work.
Line 928 checks editorPanel.contains(document.activeElement), which is true when the user is focused on bodyInput (the textarea). Pressing Escape while typing invokes deselect(), which prompts "discard unsaved changes?" — a jarring interruption mid-thought. The Escape shortcut should exclude input elements.
- if (e.key === 'Escape' && document.getElementById('editorPanel').contains(document.activeElement)) {
+ if (e.key === 'Escape'
+ && !['INPUT', 'TEXTAREA', 'SELECT'].includes(document.activeElement?.tagName)
+ && document.getElementById('editorPanel').style.display !== 'none') {
UI.deselect();
}
🤖 Prompt for AI Agents
In `@public/index.html` around lines 928 - 930, The Escape key handler currently
calls UI.deselect() when
document.getElementById('editorPanel').contains(document.activeElement), which
still triggers while typing in the editor textarea; change the condition so
Escape is ignored when focus is inside form controls or editable elements.
Update the key handler (the block referencing
document.getElementById('editorPanel') and UI.deselect()) to first test
document.activeElement and skip deselect if activeElement.matches('input,
textarea, select, [contenteditable="true"]') or if activeElement.tagName is one
of those, otherwise call UI.deselect(); keep the existing contains(...) check so
non-input focus inside editor still allows the shortcut. Ensure the new check
covers textareas, inputs, selects and contentEditable elements.
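A sketch of the keydown handler with that guard in place, covering contenteditable regions as well; element ids and `UI.deselect()` are taken from the snippet above, and in the existing code this check would live inside the current keydown handler rather than a new listener:

```js
document.addEventListener('keydown', (e) => {
  if (e.key !== 'Escape') return;
  const active = document.activeElement;
  // Ignore the shortcut while focus sits in any form control or editable region,
  // so Escape never threatens to discard in-progress edits.
  if (active && active.matches('input, textarea, select, [contenteditable="true"]')) return;
  if (document.getElementById('editorPanel').contains(active)) {
    UI.deselect();
  }
});
```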
| // Read current draft content and re-commit with status: unpublished | ||
| const message = await this.graph.showNode(draftSha); | ||
| const decoded = this.codec.decode(message); | ||
| const { updatedat: _, ...restTrailers } = decoded.trailers; | ||
| const newMessage = this.codec.encode({ | ||
| title: decoded.title, | ||
| body: decoded.body, | ||
| trailers: { ...restTrailers, status: STATES.UNPUBLISHED, updatedAt: new Date().toISOString() }, | ||
| }); |
🧹 Nitpick | 🔵 Trivial
Clarify why updatedat (lowercase) is destructured when the new key is updatedAt (camelCase).
Line 253 destructures updatedat (lowercase), then line 257 writes updatedAt (camelCase). This works because the trailer codec decodes keys to lowercase — so the destructuring strips the old value before re-adding it in camelCase. Same pattern at line 302 in revertArticle. The logic is correct, but without a comment this reads like a typo to every future reader.
📝 Proposed comment
const message = await this.graph.showNode(draftSha);
const decoded = this.codec.decode(message);
+ // Codec decodes trailer keys to lowercase; strip old `updatedat` before re-adding as camelCase
+ const { updatedat: _, ...restTrailers } = decoded.trailers;
🤖 Prompt for AI Agents
In `@src/lib/CmsService.js` around lines 250 - 258, The destructuring removes the
decoded.trailers key named "updatedat" (lowercase) before re-adding a canonical
"updatedAt" (camelCase); add a concise inline comment above the destructuring in
the block that creates newMessage (the decoded.trailers destructuring that
extracts updatedat) explaining that the trailer codec decodes keys to lowercase,
so we intentionally strip "updatedat" and reintroduce "updatedAt" when encoding
the new message, and mirror the same explanatory comment in the corresponding
destructuring in revertArticle to avoid confusion for future readers.
| const parentCommitSha = info.parents[0]; | ||
| const parentMessage = await this.graph.showNode(parentCommitSha); | ||
| const parentDecoded = this.codec.decode(parentMessage); | ||
| const { updatedat: _u, ...restParentTrailers } = parentDecoded.trailers; |
🧹 Nitpick | 🔵 Trivial
Same lowercase-destructure pattern — same comment needed.
Line 302 mirrors the updatedat destructuring from unpublishArticle. Same rationale, same readability concern.
📝 Proposed comment
const parentMessage = await this.graph.showNode(parentCommitSha);
const parentDecoded = this.codec.decode(parentMessage);
+ // Codec decodes trailer keys to lowercase; strip old `updatedat` before re-adding as camelCase
+ const { updatedat: _u, ...restParentTrailers } = parentDecoded.trailers;
🤖 Prompt for AI Agents
In `@src/lib/CmsService.js` around lines 299 - 302, The destructuring uses the
lowercase alias pattern `updatedat: _u` which reduces readability; in the block
handling parentDecoded.trailers (around parentCommitSha, parentMessage,
parentDecoded) change the alias to a clearer name (e.g. `updatedat: _updatedAt`
or `updatedat: updatedAt`) instead of `_u`, keeping the original property key
(`updatedat`) but mapping it to a readable local identifier, and leave the rest
spread as `restParentTrailers`.
| it('cannot unpublish a reverted article', async () => { | ||
| await cms.saveSnapshot({ slug: 'sm-rev-unpub', title: 'v1', body: 'b1' }); | ||
| await cms.saveSnapshot({ slug: 'sm-rev-unpub', title: 'v2', body: 'b2' }); | ||
| await cms.revertArticle({ slug: 'sm-rev-unpub' }); | ||
|
|
||
| await expect(cms.unpublishArticle({ slug: 'sm-rev-unpub' })).rejects.toMatchObject({ | ||
| name: 'CmsValidationError', | ||
| code: 'invalid_state_transition', | ||
| }); | ||
| }); | ||
|
|
||
| it('cannot revert a reverted article (double revert)', async () => { | ||
| await cms.saveSnapshot({ slug: 'sm-dbl-rev', title: 'v1', body: 'b1' }); | ||
| await cms.saveSnapshot({ slug: 'sm-dbl-rev', title: 'v2', body: 'b2' }); | ||
| await cms.revertArticle({ slug: 'sm-dbl-rev' }); | ||
|
|
||
| await expect(cms.revertArticle({ slug: 'sm-dbl-rev' })).rejects.toMatchObject({ | ||
| name: 'CmsValidationError', | ||
| code: 'invalid_state_transition', | ||
| }); | ||
| }); | ||
|
|
||
| it('cannot revert an unpublished article', async () => { | ||
| await cms.saveSnapshot({ slug: 'sm-unpub-rev', title: 'v1', body: 'b1' }); | ||
| await cms.publishArticle({ slug: 'sm-unpub-rev' }); | ||
| await cms.unpublishArticle({ slug: 'sm-unpub-rev' }); | ||
|
|
||
| await expect(cms.revertArticle({ slug: 'sm-unpub-rev' })).rejects.toMatchObject({ | ||
| name: 'CmsValidationError', | ||
| code: 'invalid_state_transition', | ||
| }); | ||
| }); |
🧹 Nitpick | 🔵 Trivial
Missing negative test: unpublished → unpublish (double unpublish).
You've added reverted → unpublish (line 229), reverted → revert (line 240), and unpublished → revert (line 251) — all addressing prior feedback. But unpublished → unpublish is conspicuously absent. If the state machine disallows it, lock it down with a test.
📝 Proposed test
it('cannot unpublish an already unpublished article (double unpublish)', async () => {
await cms.saveSnapshot({ slug: 'sm-dbl-unpub', title: 'v1', body: 'b1' });
await cms.publishArticle({ slug: 'sm-dbl-unpub' });
await cms.unpublishArticle({ slug: 'sm-dbl-unpub' });
await expect(cms.unpublishArticle({ slug: 'sm-dbl-unpub' })).rejects.toMatchObject({
name: 'CmsValidationError',
code: 'invalid_state_transition',
});
});
🤖 Prompt for AI Agents
In `@test/git.test.js` around lines 229 - 260, Add a new negative test verifying
double-unpublish is rejected: create an article via cms.saveSnapshot, publish it
with cms.publishArticle, unpublish it with cms.unpublishArticle, then assert
that a second call to cms.unpublishArticle rejects with CmsValidationError /
code 'invalid_state_transition' (mirror the style of the surrounding tests like
the existing reverted/unpublish and reverted/revert tests to keep consistency).
Summary
- CmsValidationError contract for structured validation errors
- graph DI seam for zero-subprocess unit tests (~11ms vs hundreds of ms)
- @git-stunts/alfred and docker-guard deps, dependency integrity checks, ROADMAP, CHANGELOG, formal LaTeX ADR, onboarding scripts

Test plan

- npm run test:local — 30 tests pass (3 suites: git, server, chunks)
- npm run test:git-e2e — 2 real-git smoke tests pass