fix(wasm): harden ingestion and require explicit unverified export #105
danielewood merged 5 commits into develop
Conversation
Pull request overview
This PR hardens ingestion and network-fetch behavior across the WASM build, the web AIA proxy, and CRL handling, and removes the silent “verified → unverified” export fallback by requiring explicit user confirmation.
Changes:
- Add strict size limits + panic recovery to WASM ingestion (addFiles/inspect) and improve WASM AIA fetch timeout/callback cleanup behavior.
- Enforce CRL size caps for both HTTP responses and local files (new shared reader), wiring the CLI CRL command to the bounded reader.
- Add an explicit upstream timeout/abort path to the web AIA proxy and regression coverage; update the web UI to explicitly prompt before unverified export retry.
Reviewed changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 7 comments.
Show a summary per file
| File | Description |
|---|---|
| web/public/app.js | Add fetch timeouts for WASM AIA fetch and require explicit user confirmation before retrying export without verification. |
| web/functions/api/fetch.ts | Add upstream fetch timeout via AbortController; map aborts to 504. |
| web/functions/api/fetch.test.ts | Add deterministic test ensuring aborts map to HTTP 504. |
| crl.go | Add bounded CRL reader helper and new ReadCRLFile with shared size cap. |
| crl_test.go | Add tests covering CRL response/file size-limit enforcement. |
| cmd/certkit/crl.go | Switch CLI CRL local-file reads to bounded certkit.ReadCRLFile. |
| cmd/wasm/main.go | Add WASM ingestion caps and shared readWASMFileData; plumb explicit unverified-export option into JS API. |
| cmd/wasm/inspect.go | Apply the same WASM ingestion limits to the inspect path. |
| cmd/wasm/export.go | Remove silent retry-no-verify fallback; add explicit verified-failure error context. |
| cmd/wasm/aia.go | Ensure JS promise callbacks are released even on cancellation; propagate timeouts to JS fetch. |
| CHANGELOG.md | Document behavior/security/fix changes under Unreleased. |
```go
if opts.Verify {
	return nil, fmt.Errorf("verified export failed: %w", err)
}
return nil, err
```
ERR-1 (MUST): when allowUnverifiedExport is true, the error from ExportMatchedBundles is returned unwrapped (return nil, err). Wrap it with context using %w (even if it’s the unverified path) so callers always get consistent context and wrapping.
```diff
-return nil, err
+return nil, fmt.Errorf("unverified export failed: %w", err)
```
Fixed in 3676d17. The unverified branch now wraps the export error with context ("unverified export failed: %w") so both paths satisfy ERR-1.
```diff
@@ -80,13 +84,51 @@ func FetchCRL(ctx context.Context, input FetchCRLInput) ([]byte, error) {
 	return nil, fmt.Errorf("CRL server returned HTTP %d from %s", resp.StatusCode, input.URL)
```
ERR-4 (MUST): error strings must start lowercase. This message starts with "CRL"; consider changing it to start with lowercase (e.g., "crl server returned …") to match the repo’s error-string rule.
```diff
-return nil, fmt.Errorf("CRL server returned HTTP %d from %s", resp.StatusCode, input.URL)
+return nil, fmt.Errorf("crl server returned HTTP %d from %s", resp.StatusCode, input.URL)
```
Keeping this as CRL by design. Project ERR-4 allows acronym exceptions, and this file consistently uses uppercase CRL in related errors. I did update the newly-added lowercase crl ... exceeds max size strings to uppercase for consistency.
```diff
-data, err = os.ReadFile(source)
+data, err = certkit.ReadCRLFile(source)
 if err != nil {
 	return fmt.Errorf("reading CRL file: %w", err)
```
cmd/certkit wraps the error from certkit.ReadCRLFile with "reading CRL file" again, which can produce duplicated context like "reading CRL file: opening CRL file: …". Consider either changing this wrapper string to a different context (include the path), or returning the ReadCRLFile error directly (it’s already wrapped with context).
```diff
-return fmt.Errorf("reading CRL file: %w", err)
+return fmt.Errorf("reading CRL file %s: %w", source, err)
```
Good catch—updated in 3676d17. CLI now wraps as reading CRL file %q: %w so the path context is explicit and avoids repetitive wording.
```go
data := make([]byte, size)
js.CopyBytesToGo(data, dataJS)
```
readWASMFileData ignores the return value of js.CopyBytesToGo. If the provided JS value isn’t a Uint8Array (or is shorter than Length()), CopyBytesToGo can copy fewer bytes than expected and leave the rest zeroed, which can lead to confusing parse failures. Check the returned byte count and return an error if it doesn’t match the expected size.
```diff
-js.CopyBytesToGo(data, dataJS)
+copied := js.CopyBytesToGo(data, dataJS)
+if copied != size {
+	return nil, fmt.Errorf("file %q read incomplete data: expected %d bytes, got %d", name, size, copied)
+}
```
Addressed in 3676d17. readWASMFileData now validates the js.CopyBytesToGo byte count and returns a clear error if the copy is incomplete.
```go
timeoutMillis := 10_000
if deadline, ok := ctx.Deadline(); ok {
	remaining := time.Until(deadline).Milliseconds()
	if remaining > 0 {
		timeoutMillis = int(remaining)
	}
}
```
```diff
-promise := fetchFn.Invoke(url)
+promise := fetchFn.Invoke(url, timeoutMillis)
```
jsFetchURL invokes the JS fetch promise even when ctx is already canceled/expired (and if the deadline is already exceeded, it still passes the default 10s timeout). Add an early if err := ctx.Err(); err != nil { return nil, err } before calling into JS, and treat non-positive remaining deadline as an immediate timeout to avoid starting fetches that can only be discarded.
Addressed in 3676d17. jsFetchURL now short-circuits on ctx.Err() before invoking JS fetch, and treats non-positive deadline remaining as immediate context.DeadlineExceeded.
```ts
function isAbortError(err: unknown): boolean {
  return err instanceof Error && err.name === "AbortError";
}
```
isAbortError is likely too strict in Workers runtimes: fetch aborts commonly reject with a DOMException (name "AbortError") that may not satisfy err instanceof Error. Consider loosening this to check (err as any)?.name === "AbortError" (and/or handle DOMException explicitly) so timeouts are reliably mapped to 504 in production.
```diff
-return err instanceof Error && err.name === "AbortError";
+const anyErr = err as { name?: unknown } | null | undefined;
+return anyErr?.name === "AbortError";
```
Addressed in 3676d17. Abort detection now checks err?.name === "AbortError" (not instanceof Error), and I added a regression test for non-Error abort objects in the Workers-style path.
```go
// When allowUnverifiedExport is true, chain verification is disabled explicitly.
func exportBundles(ctx context.Context, s *certstore.MemStore, filterSKIs []string, p12Password string, allowUnverifiedExport bool) ([]byte, error) {
	matched := s.MatchedPairs()
```
CS-5 (MUST): exportBundles takes more than 2 non-context parameters (store, filterSKIs, p12Password, allowUnverifiedExport). Convert these into an input struct (ctx stays separate) to match the project’s function-argument style rule and keep call sites readable as options evolve.
Addressed in 3676d17. exportBundles now takes an exportBundlesInput struct (ctx remains separate), and call sites were updated.
Addressed the remaining review findings in commit 3676d17:

Validation:
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 3676d1770d
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Follow-up: all review feedback has been addressed and fixes pushed. Please take another look when you can.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ec492fe782
```go
go func() {
	<-done
	releaseCallbacks(thenCb, catchCb)
}()
```
Prevent indefinite goroutine waits after AIA timeout
This cancellation branch starts a goroutine that blocks on done before releasing JS callbacks, but done is only closed when the JS promise resolves/rejects. In window.certkitFetchURL, the timeout is cleared as soon as fetch() returns and body reading happens later via resp.arrayBuffer() (web/public/app.js), so an upstream that sends headers and then stalls the body can leave the promise pending forever; each context timeout then leaks one waiter goroutine (and its callback references), which can accumulate under repeated stalled AIA responses.
Completed another review-feedback sweep and addressed the new actionable items from comments posted after . Changes pushed in :

Validation: no commit to main........................................................Passed
Follow-up (superseding my previous sweep note): I addressed the new actionable feedback posted after commit ec492fe and pushed commit 37933c5. Implemented:

Validation: `pre-commit run --all-files` passed locally before commit.
Completed another review sweep after commit 37933c5 and found actionable feedback from @claude[bot]. Addressed in commit c4e1fac:

Validation: `pre-commit run --all-files` passed locally before commit.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c4e1fac445
```ts
const timer = setTimeout(() => controller.abort(), timeoutMs);
try {
  const resp = await fetch(targetURL, { signal: controller.signal });
  const body = await resp.arrayBuffer();
```
Skip buffering body before deciding direct-fetch fallback
fetchBytesWithTimeout reads resp.arrayBuffer() unconditionally, but certkitFetchURL only needs the body when resp.ok is true for the direct-fetch path. When direct fetch returns a non-2xx response with a large or slow body (for example, HTML error pages from intermediaries), this now waits for full body download or timeout before proxy fallback is attempted, adding avoidable latency and memory pressure per AIA URL.
Code review

One issue found. Checked for bugs and CLAUDE.md compliance.

GIT-11: Commit messages describe what, not why

Two commits in this PR have messages that describe the act of responding to review feedback rather than the technical reason for the changes:

GIT-11 (MUST) requires commit messages to explain why, not what; the diff already shows what changed. GIT-10 confirms this applies to all commits in a PR (enforced by CI on the pr-conventions check). Messages like these provide no standalone value in git log. Prefer messages that describe the motivation, e.g.:

Since the PR is already merged this is historical, but flagging for awareness in future review-iteration commits. No other issues found: the logic, error handling, concurrency patterns, size-cap enforcement, and TypeScript timeout/abort lifecycle are all correct. The `defer func() { _ = f.Close() }()` pattern in `ReadCRLFile` is consistent with the established codebase convention used in a dozen other production files.
* fix(wasm): harden ingestion and require explicit unverified export (#105)
* fix(wasm): harden ingest and make unverified export explicit
* fix: address PR 105 reliability review feedback
* fix(web): enforce upstream timeout through body reads
* fix: address latest PR 105 review feedback
* fix(crl): clarify read errors and harden size-limit coverage
* fix: harden core parsing and identity selection (#107)
* fix: harden parsing and issuer/key selection correctness
* docs(changelog): reference PR for unreleased parsing fixes
* fix: prevent parser fallback and JKS identity regressions
* fix(inspect): preserve valid keys when PEM bundle has malformed blocks
* fix: address remaining PR 107 review feedback
* fix(jks): surface skipped-entry reasons in debug logs
* fix(scan): harden scan boundaries and export text summary (#106)
* fix(scan): keep traversal bounded and restore export summaries
* fix(scan): fail fast on walker processing errors
* fix(scan): use typed max-size errors in read paths
* fix(scan): reject invalid export formats consistently
* fix(scan): keep export destination off stdout
* fix(network): harden SSRF fetch validation and apply default connect timeout (#108)
* fix(network): harden revocation fetch SSRF checks and connect timeout defaults
* fix(network): propagate SSRF validation deadlines and unblock inspect AIA opt-in
* fix(bundle): restore private-network opt-in for AIA chain fetches
* fix(network): address remaining PR feedback for inspect AIA handling
* fix(wasm): keep AIA resolution working without DNS lookups
* fix(cli): normalize validation exits, JSON schema, and export defaults (#109)
* fix(cli): align validation exits and secure export defaults
* fix(cli): address PR #109 review feedback
* fix(tests): align PR #109 follow-up feedback
* fix(security): remediate current code scanning alerts
* build(hooks): consume shared develop branch-name exemption
* fix: address PR 110 review feedback
Summary

addFiles/inspectFiles, and tighten JS AIA fetch timeout propagation with callback cleanup on cancellation

Validation
Closes #90
Closes #100
Closes #103