relayburn-sdk-node: napi-rs bindings skeleton (#247 part a) #306
willwashburn merged 6 commits into main from
Conversation
Add `#[napi]` shims for every public verb in `relayburn-sdk` so the napi-rs CI matrix (#247-b) can produce per-platform `.node` artifacts.

Type-mapping rules established at the boundary:

- u64 token counts become JS `BigInt` (lossless above 2^53).
- Timestamps stay as ISO-8601 `String`s, matching the existing `packages/sdk/index.d.ts` byte-for-byte.
- async fn verbs (`ingest`) return `Promise<T>` via napi-rs's `tokio_rt`.
- Domain errors get a typed `BurnError` `{ code, message }` instead of leaking opaque `napi::Error` strings.

Surface bound: `summary`, `sessionCost`, `overhead`, `overheadTrim`, `hotspots`, `search`, `exportLedger`, `exportStamps`, async `ingest`, plus `ledgerOpen` for the smoke-call path. The complex result shapes for `overhead*` and `hotspots` round-trip through `serde_json::Value` so D9's `.d.ts` snapshot diff stays small.

Out of scope (deferred to #247-b): `packages/sdk-node` umbrella + per-platform packages, napi-rs CI matrix, `optionalDependencies` selector. Conformance test scaffold also lands in part b.
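The `BigInt` rule exists because JS numbers are IEEE-754 doubles, which are exact only up to 2^53. A minimal std-only Rust sketch (no napi dependency) showing where a u64 token count stops surviving the double round-trip:

```rust
// Demonstrates why u64 token counts cross the N-API boundary as BigInt:
// a JS number is an f64, which represents integers exactly only up to 2^53.
fn survives_f64(n: u64) -> bool {
    // Round-trip through f64, as a plain JS-number mapping would.
    (n as f64) as u64 == n
}

fn main() {
    let max_safe = (1u64 << 53) - 1; // Number.MAX_SAFE_INTEGER
    assert!(survives_f64(max_safe));
    assert!(survives_f64(1u64 << 53)); // 2^53 itself is representable...
    assert!(!survives_f64((1u64 << 53) + 1)); // ...but 2^53 + 1 silently rounds away
    println!("2^53 + 1 round-trips through f64 as {}", (((1u64 << 53) + 1) as f64) as u64);
}
```

Anything above that threshold (large cumulative token counts, byte totals) would be silently corrupted, which is why the binding promotes those fields rather than documenting a caveat.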
📝 Walkthrough: adds a new Node.js SDK bindings (napi-rs) crate.
Sequence Diagram

```mermaid
sequenceDiagram
    participant JS as Node.js App
    participant NAPI as napi-rs Bindings<br/>(relayburn-sdk-node)
    participant SDK as Rust SDK<br/>(relayburn-sdk)
    JS->>NAPI: ledger_open(options)
    NAPI->>NAPI: Deserialize LedgerOpenOptions
    NAPI->>SDK: Ledger::open(...)
    SDK-->>NAPI: Result<Ledger>
    NAPI->>NAPI: Map result / BurnError
    NAPI-->>JS: Result<String>
    JS->>NAPI: summary(options)
    NAPI->>SDK: ledger.summary(...)
    SDK-->>NAPI: sdk::Summary
    NAPI->>NAPI: Convert to Summary
    NAPI-->>JS: Result<Summary>
    JS->>NAPI: ingest(options)
    NAPI->>SDK: ledger.ingest(...) [async]
    SDK-->>NAPI: IngestReport
    NAPI->>NAPI: Convert to IngestReport
    NAPI-->>JS: Promise<Result<IngestReport>>
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 5 passed
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ac0d53d6ae
```rust
fn sdk_err(e: anyhow::Error) -> NapiError {
    // Render the chain so the message is informative; the discriminant
    // stays "Sdk" until the SDK's typed error story exists.
    NapiError::new(Status::GenericFailure, format!("{e:#}"))
}
```
Return typed BurnError for SDK failures
The binding defines BurnError/BurnErrorCode, but sdk_err converts every SDK failure into a plain napi::Error with Status::GenericFailure. As a result, JS callers cannot do instanceof BurnError or branch on err.code, even though the module docs and exported types advertise that contract; all exported verbs that use map_err(sdk_err) lose structured error information.
Fixed in 290b970. Verbs now return Result<T, BurnError> where BurnError = napi::Error<&'static str> carries one of BURN_SDK, BURN_IO, BURN_INVALID_ARGUMENT in the status slot. napi-rs writes that string into the thrown JS Error's code property via napi_create_error, so JS callers get try { … } catch (e) { if (e.code === 'BURN_SDK') … }. A BurnErrorCode string_enum is also exported so TS code can reference codes by name without stringly-typed literals. Caveat: the async ingest verb keeps code: 'GenericFailure' because napi-rs's execute_tokio_future adapter hard-codes the rejection type to Error<Status> — documented inline.
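The code-in-status-slot scheme described here can be modeled without the napi dependency. A hedged, std-only sketch: the variant names follow the PR text, while `as_code` is an illustrative stand-in rather than the crate's actual API.

```rust
// Sketch of the described error-code contract. The &'static str returned by
// as_code models what napi-rs writes into the thrown JS Error's `code`
// property via the status slot; names here are illustrative.
#[derive(Debug, Clone, Copy, PartialEq)]
enum BurnErrorCode {
    Sdk,
    Io,
    InvalidArgument,
}

impl BurnErrorCode {
    fn as_code(self) -> &'static str {
        match self {
            BurnErrorCode::Sdk => "BURN_SDK",
            BurnErrorCode::Io => "BURN_IO",
            BurnErrorCode::InvalidArgument => "BURN_INVALID_ARGUMENT",
        }
    }
}

fn main() {
    // JS side would branch with: if (e.code === 'BURN_SDK') { ... }
    assert_eq!(BurnErrorCode::Sdk.as_code(), "BURN_SDK");
    assert_eq!(BurnErrorCode::InvalidArgument.as_code(), "BURN_INVALID_ARGUMENT");
}
```

Keeping the codes as SCREAMING_SNAKE string constants is what lets TS callers compare `e.code` without importing anything from the native module.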
```rust
#[napi(string_enum)]
pub enum OverheadFileKind {
    ClaudeMd,
    AgentsMd,
}
```
Keep JS enum wire values compatible with SDK API
OverheadFileKind is exported as a #[napi(string_enum)] with Rust variant names (ClaudeMd, AgentsMd), but the existing SDK API surface expects kebab-case literals ('claude-md' | 'agents-md'). Without explicit enum value mapping, callers using the documented option values will be rejected at the N-API argument boundary; the same compatibility risk also applies to HotspotsGroupBy ('bash-verb', etc.).
Fixed in 290b970. OverheadFileKind and HotspotsGroupBy now use #[napi(string_enum = "kebab-case")] (napi-derive 2.16.13 supports this), so wire values match the existing SDK contract: 'claude-md' | 'agents-md' and 'attribution' | 'bash' | 'bash-verb' | 'file' | 'subagent'. Audited every #[napi(string_enum)] in the file — only those two needed conversion (BurnErrorCode already has explicit #[napi(value = "BURN_SDK")] per variant since the JS contract uses SCREAMING_SNAKE).
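What `string_enum = "kebab-case"` does to variant names can be sketched std-only. `to_kebab` below is a hypothetical stand-in for the conversion napi-derive performs at codegen time, not the macro's real implementation:

```rust
// Illustrates the wire-value rule: Rust CamelCase variant names become
// kebab-case strings at the JS boundary, e.g. ClaudeMd -> "claude-md".
fn to_kebab(variant: &str) -> String {
    let mut out = String::new();
    for (i, c) in variant.chars().enumerate() {
        if c.is_ascii_uppercase() {
            if i > 0 {
                out.push('-'); // word boundary before each interior capital
            }
            out.push(c.to_ascii_lowercase());
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    assert_eq!(to_kebab("ClaudeMd"), "claude-md");
    assert_eq!(to_kebab("AgentsMd"), "agents-md");
    assert_eq!(to_kebab("BashVerb"), "bash-verb");
}
```

This is why the fix is an attribute change rather than renaming the Rust variants: the Rust side keeps idiomatic CamelCase while the wire contract stays kebab-case.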
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@CHANGELOG.md`:
- Line 7: The release note about the new internal relayburn-sdk-node crate
should not live in the root CHANGELOG.md [Unreleased] block; remove the bullet
referencing `relayburn-sdk-node` from the root changelog and add the same entry
to the package-level changelog for the relayburn-sdk-node crate (e.g.,
relayburn-sdk-node/CHANGELOG.md) under its [Unreleased] section, keeping the
original wording about napi-rs shims, BigInt/ISO-8601 mappings, async verbs, and
typed BurnError so the note is scoped to that single-package change.
In `@crates/relayburn-sdk-node/src/lib.rs`:
- Around line 13-19: The JSON passthroughs for overhead, overheadTrim, and
hotspots currently use serde_json::to_value() and let u64 fields (tokens, bytes,
totalLines) become JS numbers; update the serialization so those u64 fields are
emitted as napi::bindgen_prelude::BigInt instead of JSON numbers. Concretely,
locate the code paths that produce `overhead`, `overheadTrim`, and `hotspots`
values, replace the naive serde_json::to_value() step with an explicit mapping
that walks the produced JSON/value and converts each u64 field (tokens, bytes,
totalLines) into a BigInt representation (or construct N-API values directly),
or add a small serializer that upgrades those specific fields to
napi::bindgen_prelude::BigInt before returning—this preserves the crate’s
documented u64→BigInt boundary (see doc comment and the D9 note).
- Around line 82-96: The current helpers sdk_err and invalid_arg return plain
NapiError values so JS receives generic Error objects instead of the promised
typed BurnError; update sdk_err and invalid_arg to construct a BurnError
instance (setting BurnError.code to the appropriate BurnErrorCode and
BurnError.message to the formatted error string) and then create the NapiError
in a way that the JS thrown value is that BurnError object (i.e.,
serialize/attach the BurnError instance as the error value or use the napi api
that throws a JS object rather than a plain string), so callers can instanceof
BurnError and read err.code; modify only sdk_err, invalid_arg (and any helper
used there) to build and return an error that carries the BurnError payload
instead of a bare message.
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (4)
- CHANGELOG.md
- crates/relayburn-sdk-node/Cargo.toml
- crates/relayburn-sdk-node/build.rs
- crates/relayburn-sdk-node/src/lib.rs
## [Unreleased]

- `relayburn-sdk-node` (Rust): napi-rs bindings skeleton — `#[napi]` shims for every public verb in `relayburn-sdk` (`summary`, `sessionCost`, `overhead`, `overheadTrim`, `hotspots`, `search`, `exportLedger`, `exportStamps`, async `ingest`, plus `ledgerOpen`), with u64 token counts surfaced as JS `BigInt`, ISO-8601 timestamps as `String`, async verbs returning `Promise<T>`, and a typed `BurnError` mapping for SDK failures. (#247)
Move this note out of the root changelog.
This PR adds an internal relayburn-sdk-node crate skeleton, while the cross-package packaging and published npm surface are explicitly deferred. Keeping this in the root [Unreleased] block overstates what is actually shipping from this change.
As per coding guidelines: "CHANGELOG.md: Update the root `CHANGELOG.md` `[Unreleased]` block only for work that spans packages or warrants top-level summary; single-package work belongs only in that package's CHANGELOG."
Pushing back on this one — keeping the entry in the root CHANGELOG. Per CLAUDE.md, the root file is the cross-package narrative for the rewrite, and relayburn-sdk-node is intentionally an internal crate (publish = false in Cargo.toml; not on crates.io and not in the per-package list of eight published TS packages). There's no per-crate CHANGELOG.md to move it to — the npm publish workflow only touches the eight packages/* files. The root [Unreleased] block is exactly where port-progress notes belong for anyone watching the 2.0 cutover, and the publish workflow will promote that block verbatim when the next release runs.
@willwashburn, that makes sense — thank you for the context. The root CHANGELOG.md serving as the unified port-progress narrative for the 2.0 cutover (with relayburn-sdk-node being publish = false and outside the eight packages/* TS packages) is a meaningful distinction I missed. The entry belongs where it is. I'll stand down on this one.
✏️ Learnings added
Learnt from: willwashburn
Repo: AgentWorkforce/burn PR: 306
File: CHANGELOG.md:7-7
Timestamp: 2026-05-06T12:38:47.974Z
Learning: In the relayburn monorepo, the root `CHANGELOG.md` `[Unreleased]` block is the canonical cross-package narrative for the 2.0 rewrite. Internal Rust crates (e.g., `relayburn-sdk-node`, which is `publish = false` and not on crates.io) do not have their own per-crate `CHANGELOG.md` and are tracked here instead. The npm publish workflow only touches the eight `packages/*` files. Port-progress notes for internal crates belong in the root `[Unreleased]` block, not in a separate package-level changelog.
…result structs (review fixes)

Addresses review feedback on PR #306:

- Verbs now return `Result<T, BurnError>` with `BurnError = napi::Error<&'static str>`, so napi-rs writes one of `BURN_SDK`, `BURN_IO`, `BURN_INVALID_ARGUMENT` into the thrown JS Error's `code` property. Adds a `BurnErrorCode` `string_enum` so TS callers can reference codes by name. The async `ingest` verb is documented as carrying `code: 'GenericFailure'` because napi-rs's tokio adapter hard-codes the rejection error type to `Error<Status>`.
- `OverheadFileKind` and `HotspotsGroupBy` use `string_enum = "kebab-case"`, so wire values match the existing TS contract (`'claude-md' | 'agents-md'`, `'attribution' | 'bash' | 'bash-verb' | 'file' | 'subagent'`).
- Adds a `BigIntPromoting` wrapper that walks the serde_json result of `overhead`, `overheadTrim`, and `hotspots` and emits `BigInt` for numeric leaves under the well-known u64 field names listed in `BIGINT_FIELDS` (`tokens`, `bytes`, `totalLines`, `sessionCount`, `startLine`, `endLine`, `tokensPerSession`, `filesAnalyzed`, `filesWithRecommendations`, `totalRecommendations`, `callCount`, `distinctCommands`, `ridingTurns`, `firstEmitTurnIndex`, `toolCallCount`, `turnsAnalyzed`, `analyzed`, `excluded`). The walker is the lighter-weight choice over rebuilding each discriminated-union variant as a typed napi struct.
- Tests cover the BIGINT_FIELDS membership invariant (positive + negative spot checks) and the SDK_ERROR_CODE / IO_ERROR_CODE / INVALID_ARGUMENT_ERROR_CODE constants.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
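The field-name-driven walker described in this commit can be sketched with a toy value type. This is a std-only illustration: `Json` and `promote` stand in for the crate's `serde_json::Value` plus `BigIntPromoting` wrapper, and `BIGINT_FIELDS` is abbreviated to three of the listed names.

```rust
// Minimal model of the BIGINT_FIELDS walker: numeric leaves under well-known
// u64 field names are upgraded to a BigInt-like representation before
// crossing the boundary; everything else passes through unchanged.
#[derive(Debug, PartialEq)]
enum Json {
    Num(u64),    // would surface as a lossy JS number
    BigInt(u64), // stand-in for a napi BigInt (lossless)
    Str(String),
    Arr(Vec<Json>),
    Obj(Vec<(String, Json)>),
}

const BIGINT_FIELDS: &[&str] = &["tokens", "bytes", "totalLines"];

fn promote(v: Json) -> Json {
    match v {
        Json::Arr(items) => Json::Arr(items.into_iter().map(promote).collect()),
        Json::Obj(fields) => Json::Obj(
            fields
                .into_iter()
                .map(|(k, v)| match v {
                    // Only numeric leaves under listed keys get promoted.
                    Json::Num(n) if BIGINT_FIELDS.contains(&k.as_str()) => (k, Json::BigInt(n)),
                    other => (k, promote(other)),
                })
                .collect(),
        ),
        leaf => leaf,
    }
}

fn main() {
    let input = Json::Obj(vec![
        ("tokens".into(), Json::Num((1 << 53) + 1)),
        ("label".into(), Json::Str("overhead".into())),
    ]);
    let out = promote(input);
    assert_eq!(
        out,
        Json::Obj(vec![
            ("tokens".into(), Json::BigInt((1 << 53) + 1)),
            ("label".into(), Json::Str("overhead".into())),
        ])
    );
}
```

The trade-off the commit names is visible here: the walker is a few dozen lines keyed on field names, versus one typed `#[napi(object)]` struct per discriminated-union variant.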
@copilot resolve the merge conflicts in this pull request
….md conflict)

Agent-Logs-Url: https://github.com/AgentWorkforce/burn/sessions/67c443f3-5eb5-494b-aef0-bbb5e1a566ed
Co-authored-by: willwashburn <957608+willwashburn@users.noreply.github.com>
Resolved in 7ce35bf. The only conflict was in
Actionable comments posted: 2
Inline comments:
In `@crates/relayburn-cli/src/harnesses/pending_stamp.rs`:
- Around line 92-100: The adapter factory currently returns Box<dyn
HarnessAdapter> which cannot produce a &'static dyn HarnessAdapter for the
compile-time phf::Map in registry.rs; change the factory to produce a &'static
instance or convert registry to runtime construction. Specifically, either (A)
add a function like pub fn adapter_static(config: PendingStampAdapter) ->
&'static dyn HarnessAdapter that constructs a PendingStampAdapterImpl via
Box::leak(Box::new(PendingStampAdapterImpl::new(config))) (keeping
PendingStampAdapterImpl::new) so callers in registry.rs can store &'static dyn
HarnessAdapter, or (B) refactor registry.rs to build its map at runtime (not
with phf::Map) so it can accept Box<dyn HarnessAdapter> returned by adapter;
pick one approach and apply it consistently where adapter,
PendingStampAdapterImpl, HarnessAdapter, and registry.rs are referenced.
In `@crates/relayburn-sdk-node/src/lib.rs`:
- Around line 763-795: ingest currently returns NapiResult<IngestReport> and
maps sdk errors to NapiError::from_reason (producing GenericFailure); change
ingest to return Result<IngestReport, BurnError> (matching the other verbs) and
map the sdk::ingest error into the crate's BurnError (e.g.
map_err(BurnError::from) or a conversion function) instead of
NapiError::from_reason; update the function signature (pub async fn ingest(opts:
Option<IngestOptions>) -> Result<IngestReport, BurnError>) and remove the
NapiError::from_reason usage so the exported async verb surfaces the
BurnErrorCode codes rather than GenericFailure.
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (8)
- CHANGELOG.md
- crates/relayburn-cli/Cargo.toml
- crates/relayburn-cli/src/harnesses/mod.rs
- crates/relayburn-cli/src/harnesses/pending_stamp.rs
- crates/relayburn-cli/src/harnesses/registry.rs
- crates/relayburn-cli/src/lib.rs
- crates/relayburn-sdk-node/src/lib.rs
- crates/relayburn-sdk/src/lib.rs
# Conflicts:
#	CHANGELOG.md
…w fixes round 2)

Addresses CodeRabbit major comment 3195529029 on PR #306. The previous fixup (290b970) documented the napi-rs limitation in the ingest() doc comment but the file header still claimed "JS callers can branch on e.code === BurnErrorCode.*" without exception, so the contract was a lie for the async verb.

Took Path B (narrow the contract) after evaluating Path A:

- napi-rs 2.x's `#[napi] async fn` is lowered (in napi-derive-backend's codegen/fn.rs) to `napi::bindgen_prelude::execute_tokio_future`, which is hard-typed to `Result<T, napi::Error<Status>>`. `Status` is a closed enum over the predefined NAPI status strings — there is no variant for "BURN_SDK".
- `JsDeferred::reject` only accepts `Error<Status>`. `AsyncTask` (the alternative async path) routes rejections through the same `JsError::from(Error<Status>).into_value` codepath in async_work.rs's `complete()`. No public escape hatch in napi 2.x.
- The only way to inject a non-Status `code` would be to hand-roll a `JsDeferred` replacement on top of raw `sys::napi_*` calls + a TSFN. That's >200 lines of unsafe boilerplate exactly matching what the reviewer flagged as "not clean."
- Upgrading to napi-rs 3.x is out of scope for this skeleton PR (separate evaluation needed for `string_enum` / `BigInt` ergonomics vs. the rest of the binding).

Path B changes:

- File header now explicitly enumerates the async exception under "Errors → typed BurnError JS class (sync verbs only)", points at the napi-rs source paths that document the limitation, and gives JS callers the concrete contract: `e.code === 'GenericFailure'`, rendered SDK error chain in `e.message`. Notes that a future napi-rs 3.x upgrade would tighten this.
- `ingest()` doc comment now leads with the contract rather than burying the caveat at the bottom, and points readers at the type-level invariant test below.
- New test `ingest_uses_generic_failure_code_runtime_invariant` pins the discrepancy as a type-level fact: `BurnError` (`Error<&'static str>`) and `napi::Error` (`Error<Status>`) must remain distinct types. If a future napi-rs upgrade or hand-rolled deferred unifies them, this test starts failing — that's the signal to switch ingest's signature back to `Result<IngestReport, BurnError>` and remove the caveat in lockstep. Also asserts the documented async fallback code is exactly `"GenericFailure"` and never collides with the BURN_* codes the sync verbs use.

Validation:

- `cargo build -p relayburn-sdk-node` clean.
- `cargo test --workspace` green (605 SDK + 2 SDK integration + 8 sdk-node unit tests, including the new invariant).
Actionable comments posted: 1
Inline comments:
In `@crates/relayburn-sdk-node/src/lib.rs`:
- Around line 716-742: The JSON returned by export_ledger and export_stamps
currently returns Vec<JsonValue> and can lose precision for u64 fields
(turn_index, event_index, content_length, tokens_before_compact) when crossing
the N-API boundary; either wrap the collected Vec<JsonValue> in the existing
BigIntPromoting wrapper (same approach used by overhead/hotspots) before
returning, or add a clear comment above the pub fn export_ledger and pub fn
export_stamps signatures documenting that these fields are intentionally NOT
promoted and that values >2^53 will lose precision; reference the
BigIntPromoting type and the specific field names in the comment so maintainers
know why to choose promotion vs documented caveat.
📒 Files selected for processing (1)
crates/relayburn-sdk-node/src/lib.rs
…d 3)

`exportLedger` and `exportStamps` returned `Vec<JsonValue>` directly, losing u64 precision for `turnIndex`, `eventIndex`, `contentLength`, `tokensBeforeCompact`, and the rest of the export-record u64 surface (`callIndex`, `byteLen`, `approxTokens`, `retries`, `collapsedCalls`, plus the six `usage` sub-object fields). Anything above 2^53 silently truncated when serde-json's default number conversion crossed the napi boundary.

Wrap both verbs in `BigIntPromoting`, the same lighter walker the round-1 commit `290b970` introduced for `overhead` / `overheadTrim` / `hotspots`, and extend `BIGINT_FIELDS` with the camelCased names from TurnRecord / UserTurnRecord / ToolResultEventRecord / CompactionEvent / nested Usage.

Audited the other return paths (`search`, `ledgerOpen`, the typed-struct verbs) — those go through `#[napi(object)]` + explicit `BigInt` conversions and don't surface u64 through serde-json.

New test `export_record_u64_fields_survive_above_2_pow_53` pins both the membership checks for the four field names CodeRabbit called out and a wrapper round-trip with `(1 << 53) + 1` to keep the failure mode visible if a future change drops a key from `BIGINT_FIELDS`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
Inline comments:
In `@crates/relayburn-sdk-node/src/lib.rs`:
- Around line 703-705: The conversion of opts.limit currently does
bigint_to_u64(b)? as usize which can silently truncate on 32-bit targets; change
the logic that builds limit so after bigint_to_u64(b)? you explicitly check that
the u64 fits into usize (e.g. use usize::try_from(u64_val) or compare against
usize::MAX) and return the BURN_INVALID_ARGUMENT error if it does not fit,
instead of performing a raw as cast; update the code paths around bigint_to_u64
and the local variable limit/opts.limit to perform this range-check and error
return.
📒 Files selected for processing (1)
crates/relayburn-sdk-node/src/lib.rs
```rust
let limit = match opts.limit {
    Some(b) => Some(bigint_to_u64(b)? as usize),
    None => None,
};
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify the narrowing-cast path and whether any 32-bit build targets are present.
# 1) Confirm SDK-side limit type.
rg -n -C2 --type=rust 'struct\s+SearchQueryOptions|limit\s*:\s*Option<usize>' crates/relayburn-sdk/src
# 2) Confirm current node-binding cast site.
rg -n -C2 --type=rust 'Some\(bigint_to_u64\(b\)\?\s+as\s+usize\)' crates/relayburn-sdk-node/src/lib.rs
# 3) Check workflow matrices for potential 32-bit targets.
if [ -d .github ]; then
  fd -HI -t f '.*\.(yml|yaml)$' .github \
    | xargs -r rg -n --ignore-case 'i686|x86|armv7|32'
fi
```

Repository: AgentWorkforce/burn
Length of output: 1182
Avoid silent truncation when converting limit to usize.
At Line 704, bigint_to_u64(b)? as usize can truncate on 32-bit targets. This should reject out-of-range input as BURN_INVALID_ARGUMENT instead of silently changing the requested limit.
Suggested fix:

```diff
 let limit = match opts.limit {
-    Some(b) => Some(bigint_to_u64(b)? as usize),
+    Some(b) => {
+        let raw = bigint_to_u64(b)?;
+        Some(
+            usize::try_from(raw)
+                .map_err(|_| invalid_arg("limit exceeds platform usize range"))?,
+        )
+    }
     None => None,
 };
```
};🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@crates/relayburn-sdk-node/src/lib.rs` around lines 703 - 705, The conversion
of opts.limit currently does bigint_to_u64(b)? as usize which can silently
truncate on 32-bit targets; change the logic that builds limit so after
bigint_to_u64(b)? you explicitly check that the u64 fits into usize (e.g. use
usize::try_from(u64_val) or compare against usize::MAX) and return the
BURN_INVALID_ARGUMENT error if it does not fit, instead of performing a raw as
cast; update the code paths around bigint_to_u64 and the local variable
limit/opts.limit to perform this range-check and error return.
…y (follow-up to #307) (#311)

* relayburn-cli: make pending-stamp adapter compatible with phf registry

The Wave 2 plan in `harnesses/registry.rs` calls for codex/opencode to register through `pending_stamp::adapter`, but that factory returned a runtime `Box<dyn HarnessAdapter>` while the `ADAPTERS` map is a compile-time `phf::Map<&'static str, &'static dyn HarnessAdapter>`. A `Box` cannot satisfy the `&'static` value bound, so the codex / opencode adapters would not have compiled when their PRs landed.

Add `pending_stamp::adapter_static` that `Box::leak`s the impl and returns `&'static dyn HarnessAdapter`. Wave 2 adapters wrap the call in a `LazyLock<&'static dyn HarnessAdapter>` next to their module declaration. The leak is bounded (one Box per harness, ~3 ever) and adapters live until process exit anyway, so it's the right trade-off versus converting the registry to a runtime `OnceLock<HashMap<...>>`.

Add a regression test in `harnesses/registry.rs` that proves a `pending_stamp::adapter_static` value satisfies the same value-type bound `ADAPTERS` requires.

Follow-up to #307. Addresses the CodeRabbit review comment at #306 (comment).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* relayburn-cli: split eager vs runtime adapter registries (review fix)

The previous registry doc-comments claimed codex/opencode would land in the `phf::Map`, but pending-stamp adapters are constructed at runtime via `Box::leak` and cannot appear in `phf_map!` (which needs const expressions). Split the registry into two tiers that match the real constraint:

* `EAGER_ADAPTERS: phf::Map<…>` — for stateless unit-struct adapters whose value is a const reference (claude lands here).
* `RUNTIME_ADAPTERS: LazyLock<HashMap<…>>` — for adapters built via `pending_stamp::adapter_static` whose `&'static dyn HarnessAdapter` is produced at runtime (codex / opencode land here).

`lookup` checks the phf tier first, then falls back to the runtime tier; common-case lookups still pay only a single perfect-hash probe.

Rewrites the regression test to exercise the actual production wiring path: `FAKE_RUNTIME_REGISTRY` uses the same `LazyLock<HashMap<&'static str, &'static dyn HarnessAdapter>>` value type that production uses, so insertion of a leaked `pending_stamp::adapter_static(...)` reference proves compatibility with the registry it will actually live in.

Adds a compile-time assertion (`_ASSERT_ADAPTER_STATIC_FITS_REGISTRY`) that pins `adapter_static`'s signature to the registry's value bound — if the factory regresses to `Box<dyn HarnessAdapter>`, both the runtime test fixture and the const assertion fail to compile.

Addresses Codex review comment on PR #311.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* relayburn-cli: list_harness_names doesn't init runtime registry + deterministic ordering (review fix round 2)

list_harness_names previously called RUNTIME_ADAPTERS.keys(), which forces LazyLock init via Deref and defeats the cold-start goal stated in the registry's own doc comment (burn --help shouldn't pay harness-table construction cost). It also leaked HashMap's HashDoS-randomized iteration order into user-facing help output, so the harness list could flicker between runs once Wave 2 adapters land.

Fix:

* Add a sibling RUNTIME_ADAPTER_NAMES: &[&str] static next to RUNTIME_ADAPTERS. Doc comment calls out the Wave 2 sync requirement: every codex (#248-e) / opencode (#248-f) PR must uncomment two rows in lockstep — its RUNTIME_ADAPTERS insert and its RUNTIME_ADAPTER_NAMES entry.
* list_harness_names now extends with RUNTIME_ADAPTER_NAMES.iter(). RUNTIME_ADAPTERS is no longer touched on the help path.
* Replace the empty-list test with list_harness_names_is_deterministic, which snapshots the expected ordering against an EXPECTED_HARNESS_NAMES constant and asserts two consecutive calls return the identical Vec. Doc comment leaves a cold-start proof: by inspection, list_harness_names doesn't touch the LazyLock, so the test passes without ever forcing init.
* Add runtime_adapter_names_match_runtime_adapters as a sync-invariant test: every name in RUNTIME_ADAPTER_NAMES must resolve through lookup. Empty no-op today, pinned for Wave 2.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Wave 1 D1 of the Rust port (epic #240). Fills in `crates/relayburn-sdk-node/` with napi-rs bindings over the published SDK surface so the napi-rs CI matrix (#247-b, in parallel) can produce per-platform `.node` artifacts for `@relayburn/sdk@2.0`. Parent: #247.

In this PR (#247-a)

- `crates/relayburn-sdk-node/Cargo.toml` pulls `napi`, `napi-derive`, and `napi-build` for the build script. `crate-type = ["cdylib", "rlib"]` so the same crate produces both the loadable `.node` and a normal Rust lib (used by tests).
- `crates/relayburn-sdk-node/build.rs` runs `napi_build::setup()`.
- `crates/relayburn-sdk-node/src/lib.rs` exposes `#[napi]` shims for every public verb in `relayburn-sdk`:
  - `summary` (sync, returns typed `Summary`)
  - `sessionCost` (sync, returns typed `SessionCostResult`)
  - `overhead` / `overheadTrim` (sync, results pass through `serde_json::Value`)
  - `hotspots` (sync, discriminated union via `serde_json::Value`)
  - `search` (sync, returns typed `SearchResult`)
  - `exportLedger` / `exportStamps` (sync, return `Vec<serde_json::Value>`)
  - `ingest` (async, returns `Promise<IngestReport>` on the JS side)
  - `ledgerOpen` (smoke-call shim mirroring `Ledger.open()`)

Type-mapping rules established at the boundary

- `u64` token counts → `BigInt` (lossless above 2^53). Applied to `Summary::totalTokens`, every `tokens` row in `byTool`/`byModel`, `SessionCostResult::totalTokens`, `IngestReport::*`, etc.
- Timestamps → `String`. Matches today's `packages/sdk/index.d.ts` byte-for-byte; no `chrono::DateTime` or `Date` dragged through FFI.
- Async verbs → `Promise<T>` via napi's `tokio_rt` feature.
- Domain errors → typed `BurnError` `{ code, message }` instead of opaque `napi::Error` strings. The discriminant enum (`BurnErrorCode`) currently has `Sdk`/`Io`/`InvalidArgument`; future SDK error variants land here as the SDK's typed error story grows.

Tests + workspace status

- `cargo build -p relayburn-sdk-node` succeeds.
- `cargo test --workspace` passes (605 SDK + 2 SDK integration + 5 new `relayburn-sdk-node` helper tests, all green).
- Helper tests cover `u64_to_bigint`/`bigint_to_u64` round-trips at small + `u64::MAX`, signed + over-wide rejection, and the `maybe_path` Option-wrap.

Changelog

Single line under `[Unreleased]` in the root `CHANGELOG.md` per CLAUDE.md's narrative-changelog rule.

Deferred to #247-b (parallel agent)

- `packages/sdk-node/` umbrella + per-platform packages.
- `optionalDependencies` selector + esbuild bundle smoke test.

Deferred to D9 (Wave 2 napi-rs polish)

- `.d.ts` snapshot test against `packages/sdk/index.d.ts`. The shape is intentionally close so the D9 diff is small, but `OverheadResult`/`OverheadTrimResult`/`HotspotsResult` currently flow through `serde_json::Value`; D9 owns whether those rebuild as full typed `#[napi(object)]` structs (with `BigInt` for `tokens`/`bytes`/`totalLines`) or stay JSON.
- `ingest` (the JS side doesn't expose them today either).
- `Ledger` JS class with stateful handle (the current `ledgerOpen` is open-and-drop).

Test plan

- `cargo build -p relayburn-sdk-node` succeeds locally.
- `cargo test --workspace` is green.
- `.d.ts` shape once [Rust port] relayburn-sdk-node + @relayburn/sdk 2.0 (napi-rs) #247-b's napi-rs CLI integration lands (D9 owns the snapshot diff).

🤖 Generated with Claude Code