refactor: integrate trbr#38
📝 Walkthrough: Vendors the full …
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant Capturer
    participant Decode
    participant Addr2Line
    participant GDB
    participant FS
    Client->>Capturer: push(bytes)
    Capturer->>Capturer: decode lines & frame -> dedupe -> emit event
    Client->>Decode: decode(params, input)
    Decode->>FS: read/write temp ELF (if stream/coredump)
    Decode->>Addr2Line: request addr->line mapping
    Addr2Line->>GDB: spawn gdb & send commands
    GDB-->>Addr2Line: gdb responses
    Addr2Line-->>Decode: resolved locations
    Decode-->>Client: decoded result / events
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 5 passed
⚠️ Review ran into problems. 🔥 Git: Failed to clone repository.
Actionable comments posted: 14
🧹 Nitpick comments (16)
src/vendor/trbr/exec.js (1)
12-25: Consider documenting the implicit 1 MB maxBuffer default.
`execFile` without an explicit `maxBuffer` defaults to ~1 MB and rejects with `ERR_CHILD_PROCESS_STDIO_MAXBUFFER` on overflow. Trbr callers that pipe large GDB/addr2line output through this helper (e.g. coredump dumps) can hit it. Either set a higher default here or call out the limitation in the JSDoc so callers know to override via `options`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/exec.js` around lines 12 - 25, The exec helper (exported function exec) currently calls execFile without an explicit maxBuffer, risking ERR_CHILD_PROCESS_STDIO_MAXBUFFER for large stdout/stderr; update exec to set a sensible default maxBuffer (e.g., 10 * 1024 * 1024) and merge it with any user-provided options so callers can still override it, or alternatively add a JSDoc comment above exec documenting the implicit ~1MB default and instructing callers to pass options.maxBuffer to override—refer to the exec function and the execFile call when applying the change.
src/vendor/trbr/decode/_tinyrainbow.js (1)
1-22: Stub silently degrades `color: 'force'` / default modes.
The header comment correctly notes esp-decoder always uses `{ color: 'disable' }`, but this module is consumed by `stringify.js`'s `force` and default branches too (per the cross-file snippet). If a future caller (or a vendored trbr code path) changes the option, output will silently lose all colorization with no signal. Consider either:
- exporting the same symbols as no-ops only (current) but adding a runtime warning when `FORCE_COLOR` is set, or
- keeping a minimal ANSI wrapper for `force` mode so the contract isn't quietly broken.
Not blocking given the documented invariant; flagging for future-proofing.
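The warn-once option might look like this sketch. The `identity`/`createColors` names mirror the stub's described exports; the `warned` flag and the warning text are assumptions:

```javascript
// Identity color functions: output is passed through unchanged.
const identity = (text) => String(text)

let warned = false
function createColors() {
  // Assumption: warn once if the environment asks for color the stub cannot provide.
  if (process.env.FORCE_COLOR && !warned) {
    warned = true
    console.warn('[trbr] vendored tinyrainbow stub is a no-op; ANSI colors are disabled')
  }
  return { red: identity, green: identity, yellow: identity, bold: identity }
}
```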
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/_tinyrainbow.js` around lines 1 - 22, The stub currently always returns identity color functions (palette/createColors/identity) which silently drop colorization if callers switch to `force` or default modes; detect when FORCE_COLOR is set (e.g. process.env.FORCE_COLOR truthy) at module load and emit a single runtime warning (console.warn or processLogger.warn) indicating the vendored tinyrainbow is a no-op and will not emit ANSI colors, while leaving createColors, palette and identity behavior unchanged so existing callers continue to work; this preserves the current API (createColors, default export palette) but makes the silent degradation observable.
src/vendor/trbr/capturer/framer.js (2)
104-120: The `hasSignal` check can never be false; consider removing or commenting.
`pushLine` only allocates `this._active` when `isStartLine(line)` is true and that same line is then pushed onto `_active.lines`, so by construction `_active.lines` always contains a start line. The `hasSignal && lines.length > 0` guard in `_finalize` therefore can never reject. Either drop it or add a comment explaining it as a defensive safeguard against future refactors.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/capturer/framer.js` around lines 104 - 120, The hasSignal check in _finalize is redundant because pushLine only creates this._active when isStartLine(line) is true, so update the _finalize method to remove the unreachable hasSignal logic: delete the hasSignal variable and change the guard to just check lines.length > 0 (i.e., if (lines.length > 0) { finalized.push({...}) }), and remove any reference to isStartLine in this function; alternatively, if you prefer defensive coding, keep the hasSignal expression but add a short comment above it explaining it's a defensive safeguard against future refactors and why it may appear redundant given pushLine and isStartLine.
5-15: Deduplicate `startPatterns` and `reasonPatterns`.
The two arrays are character-identical. A change to one (e.g., adding an `assert failed:` pattern upstream) will easily forget the other and quietly desync start- vs reason-line detection.
♻️ Proposed cleanup

```diff
-const startPatterns = [
+const crashPatterns = [
   /Guru Meditation Error:/i,
   /panic'ed/i,
   /^Exception\s+\(\d+\):?/i,
 ]
-
-const reasonPatterns = [
-  /Guru Meditation Error:/i,
-  /panic'ed/i,
-  /^Exception\s+\(\d+\):?/i,
-]
@@
-function isStartLine(line) {
-  return startPatterns.some((pattern) => pattern.test(line))
-}
+function isStartLine(line) {
+  return crashPatterns.some((pattern) => pattern.test(line))
+}
@@
-function isReasonLine(line) {
-  return reasonPatterns.some((pattern) => pattern.test(line))
-}
+function isReasonLine(line) {
+  return crashPatterns.some((pattern) => pattern.test(line))
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/capturer/framer.js` around lines 5 - 15, startPatterns and reasonPatterns are identical and should be deduplicated: create a single shared constant (e.g., basePatterns or commonPatterns) containing the regex list and assign both startPatterns and reasonPatterns to that shared constant (or export both as references to it), optionally freezing the array if immutability is desired; update usages to reference startPatterns and reasonPatterns as before so external code is unaffected.
src/vendor/trbr/decode/regAddr.js (1)
162-175: Duplicate JSDoc block.
The same `@param`/`@returns` doc block is repeated immediately. Drop one copy.
♻️ Proposed cleanup

```diff
-/**
- * Normalize a parsed GDB line entry. If both file and lineNumber are missing or
- * unknown, omit them. If either is present but unknown, default to '??'.
- *
- * @param {ParsedGDBLine} entry
- * @returns {GDBLine | ParsedGDBLine}
- */
 /**
  * Normalize a parsed GDB line entry. If both file and lineNumber are missing or
  * unknown, omit them. If either is present but unknown, default to '??'.
  *
  * @param {ParsedGDBLine} entry
  * @returns {GDBLine | ParsedGDBLine}
  */
 function normalizeParsedLine(entry) {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/regAddr.js` around lines 162 - 175, Remove the duplicate JSDoc block that repeats the "Normalize a parsed GDB line entry" documentation; keep a single JSDoc comment containing the `@param` {ParsedGDBLine} entry and `@returns` {GDBLine | ParsedGDBLine} tags directly above the normalize function (the block describing file/lineNumber behavior) so there is only one docblock for that function.
src/vendor/trbr/decode/regs.js (1)
36-36: TODO unresolved: verify RISC-V register layout against `gdbRegsInfoRiscvIlp32`.
The order/contents of `registerSets.riscv` are consumed by the panic parser to map register dump fields by index — a mismatch with the actual GDB ILP32 register layout would produce silently-misnamed register values in decoded output. Please confirm the order matches before relying on it.
Want me to open a tracking issue for this verification?
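A sanity check of the kind suggested could be sketched like this. `assertRegisterLayout` and the pinned-index shape are hypothetical, and the sample names below are standard RISC-V ABI register names used purely for illustration, not the actual `registerSets.riscv` contents:

```javascript
// Hypothetical sanity check: verify the count and spot-check pinned indices
// against the source-of-truth layout before indexing into the register list.
function assertRegisterLayout(regs, expectedCount, pinned = {}) {
  if (regs.length !== expectedCount) {
    throw new Error(`register count mismatch: ${regs.length} !== ${expectedCount}`)
  }
  for (const [index, name] of Object.entries(pinned)) {
    if (regs[Number(index)] !== name) {
      throw new Error(`regs[${index}] is '${regs[Number(index)]}', expected '${name}'`)
    }
  }
}
```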
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/regs.js` at line 36, registerSets.riscv's order is unverified and may not match the GDB ILP32 register layout used by gdbRegsInfoRiscvIlp32, which can mislabel registers when the panic parser indexes into it; compare the current registerSets.riscv array in src/vendor/trbr/decode/regs.js against the authoritative gdbRegsInfoRiscvIlp32 definition (field order and names), reorder/rename entries in registerSets.riscv to exactly match that layout, add a short comment citing gdbRegsInfoRiscvIlp32 as the source of truth, and add a unit test or a sanity-check in the panic parser that asserts the expected count and key indices to prevent regressions.
src/vendor/trbr/decode/coredump.js (3)
320-320: Pass an explicit radix to `parseInt`.
`parseInt(tid)` is missing the radix argument. While modern JS engines default to base 10 for non-`0x`/`0o` strings, lint rules (`radix`) typically flag this. Use `parseInt(tid, 10)`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/coredump.js` at line 320, The parseInt call used to set coreId currently omits the radix (coreId: parseInt(tid)); update this to pass an explicit radix so it always parses decimal (e.g., coreId: parseInt(tid, 10)). Locate the assignment where coreId is derived from tid in the code around the coredump parsing logic and replace parseInt(tid) with parseInt(tid, 10) to satisfy the radix lint rule and ensure consistent parsing.
149-160: Dangling/leftover JSDoc block.
The JSDoc at Lines 149–153 (`@param str`, `@param key`, `@returns string | undefined`) does not document any function — it appears to be a leftover from when `extractMiListContent` lived here before being moved to `gdbMi.js`. It now sits directly above the `decodeCoredump` JSDoc and only adds confusion. Remove it.
🧹 Proposed cleanup

```diff
-/**
- * @param {string} str
- * @param {string} key
- * @returns {string | undefined}
- */
 /**
  * @param {DecodeCoredumpParams} params
  * @param {DecodeInputFileSource} input
  * @param {boolean} [tryRepair]
  * @param {import('./decode.js').DecodeOptions} [options={}] Default is `{}`
  * @returns {Promise<CoredumpDecodeResult>}
  */
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/coredump.js` around lines 149 - 160, Remove the stray JSDoc block that documents "@param {string} str", "@param {string} key", and "@returns {string | undefined}" which is a leftover for extractMiListContent and now sits immediately above the decodeCoredump JSDoc; simply delete that orphan comment block so only the correct JSDoc for decodeCoredump remains (verify the extractMiListContent reference is not needed in this file and that decodeCoredump's JSDoc starts immediately after the removed block).
38-83: ELF size estimate ignores section headers; assumes 32-bit ELF.
`elfTotalSize = max(p_offset + p_filesz)` only covers program-header-mapped data; section headers and `.symtab`/`.strtab` typically live past that range and are required for symbol resolution by GDB. Also, all reads use `readUInt32LE`, which is only valid for `EI_CLASS = ELFCLASS32`. Consider:
- Reading `e_ident[4]` (offset+4) to confirm `ELFCLASS32` before using these offsets.
- Including `e_shoff + e_shnum * e_shentsize` in the size estimate so the extracted ELF retains section data needed by GDB.
🤖 Prompt for AI Agents
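Both suggestions can be sketched together. This assumes a Node `Buffer` holding the candidate image at `offset`; the field offsets follow the standard ELF32 header layout, and `estimateElf32Size` is an illustrative name, not the vendored function:

```javascript
const ELFCLASS32 = 1

function estimateElf32Size(buf, offset = 0) {
  // e_ident[EI_CLASS] is byte 4 of the identification block.
  if (buf[offset + 4] !== ELFCLASS32) {
    throw new Error('not an ELFCLASS32 image; 32-bit field offsets do not apply')
  }
  const e_phoff = buf.readUInt32LE(offset + 28)
  const e_shoff = buf.readUInt32LE(offset + 32)
  const e_phentsize = buf.readUInt16LE(offset + 42)
  const e_phnum = buf.readUInt16LE(offset + 44)
  const e_shentsize = buf.readUInt16LE(offset + 46)
  const e_shnum = buf.readUInt16LE(offset + 48)

  // Start from the end of the section header table, then fold in segments.
  let end = e_shoff + e_shentsize * e_shnum
  for (let i = 0; i < e_phnum; i++) {
    const ph = offset + e_phoff + i * e_phentsize
    const p_offset = buf.readUInt32LE(ph + 4)
    const p_filesz = buf.readUInt32LE(ph + 16)
    end = Math.max(end, p_offset + p_filesz)
  }
  return end
}
```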
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/coredump.js` around lines 38 - 83, In tryRawElfFallback: detect ELF class by reading e_ident[4] at offset+4 and only use the 32-bit readUInt32LE path when EI_CLASS === ELFCLASS32; for ELFCLASS64 use the 64-bit fields (readBigUInt64LE / readUInt32LE as appropriate) and adjust header offsets/sizes accordingly (e_phoff/e_shoff are 64-bit in ELF64 and program header fields p_offset/p_filesz are 64-bit). Also include section header space in the size estimate by computing shEnd = e_shoff + e_shentsize * e_shnum and taking elfTotalSize = max(maxEndFromProgramHeaders, shEnd). Reference: function tryRawElfFallback, variables e_ident, e_phoff, e_phentsize, e_phnum, e_shoff, e_shentsize, e_shnum, and the p_offset/p_filesz reads.
src/vendor/trbr/decode/riscvPanicParse.js (1)
177-195: O(n²) buffer growth via repeated `Buffer.concat`.
`stackData = Buffer.concat([stackData, lineData])` on every iteration reallocates the entire growing buffer. For longer stack dumps this is unnecessarily costly. Collect chunks into an array and concat once at the end.
♻️ Proposed refactor

```diff
 function getStackAddrAndData({ stackDump }) {
   let stackBaseAddr = 0
   let baseAddr = 0
   let bytesInLine = 0
-  let stackData = Buffer.alloc(0)
+  /** @type {Buffer[]} */
+  const chunks = []
   stackDump.forEach((line) => {
     const prevBaseAddr = baseAddr
     baseAddr = line.baseAddr
     if (stackBaseAddr === 0) {
       stackBaseAddr = baseAddr
     } else if (baseAddr !== prevBaseAddr + bytesInLine) {
       throw new Error('Invalid base address')
     }
     const lineData = Buffer.concat(
       line.data.map((word) => {
         const buf = Buffer.alloc(4)
         buf.writeUInt32LE(word >>> 0)
         return buf
       })
     )
     bytesInLine = lineData.length
-    stackData = Buffer.concat([stackData, lineData])
+    chunks.push(lineData)
   })
-  return { stackBaseAddr, stackData }
+  return { stackBaseAddr, stackData: Buffer.concat(chunks) }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/riscvPanicParse.js` around lines 177 - 195, The loop over stackDump currently grows stackData with repeated Buffer.concat (stackData = Buffer.concat([stackData, lineData])), causing O(n²) reallocations; change the implementation in the stackDump.forEach block to collect each lineData into an array (e.g., chunks) and track totalLength (use bytesInLine or sum lineData.length), preserve the existing baseAddr/stackBaseAddr/bytesInLine validation logic, then after the loop perform a single Buffer.concat(chunks, totalLength) to produce stackData once. Ensure variable names referenced (stackDump.forEach, stackData, lineData, bytesInLine, baseAddr, stackBaseAddr) are updated accordingly.
src/vendor/trbr/decode/stringify.js (1)
104-118: Mutating `process.env.FORCE_COLOR` is a leaky side effect.
Setting and (sometimes) deleting `process.env.FORCE_COLOR` from a formatter function is a global mutation that:
- Affects any other code in the same process that observes that env var.
- Races under concurrent `stringifyDecodeResult` calls (interleaved set/reset can leave the env var stuck on or stuck off).
If `tinyrainbowCreateColors()` accepts a "force color" argument or option, prefer passing it explicitly rather than poking the environment.
🤖 Prompt for AI Agents
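If the factory takes such an argument, the formatter can derive everything from the option with no env writes. A sketch, assuming a `createColors(force)` signature like upstream tinyrainbow's; the vendored stub may need that parameter added:

```javascript
// 'disable' -> identity palette; otherwise delegate, forcing ANSI for 'force'.
// The createColors(force) signature is an assumption about the color factory.
function createColorFn(options, createColors) {
  if (options.color === 'disable') {
    const identity = (text) => String(text)
    return { red: identity, green: identity, bold: identity }
  }
  return createColors(options.color === 'force')
}
```

A caller passes the factory in, so nothing here reads or writes `process.env`.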
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/stringify.js` around lines 104 - 118, The code mutates process.env.FORCE_COLOR inside the options.color === 'force' branch which causes global side effects and races; update the logic in the stringify flow (the branch using options.color, tinyrainbowCreateColors(), create(), colorizeFn, resetColor) to avoid touching process.env: instead call tinyrainbowCreateColors with an explicit forceColor argument (or extend tinyrainbowCreateColors/create to accept a force flag) and create the colorizeFn using that flag, remove the process.env set/delete and the resetColor mutator, and return the colorizeFn (and a no-op reset if you must preserve the API) so callers get forced coloring without global env mutation.
src/vendor/trbr/decode/riscv.js (1)
1321-1349: Nit: `faultAdd` reads as a typo of `faultAddr`.
Consistently used here, in the log call, and in the `createDecodeResult` argument, so it's not a bug — but the name is mildly confusing next to the `faultAddr` field on the result. Renaming to `faultAddr` (or any non-truncated form) would help readability.
♻️ Suggested rename

```diff
-  const [stacktraceLines, [programCounter, faultAdd], globals] =
+  const [stacktraceLines, [programCounter, faultAddr], globals] =
     await Promise.all([
       processPanicOutput(params, panicInfo, options, log),
       addr2line(
         params,
         [panicInfo.programCounter, panicInfo.faultAddr],
         options
       ),
       includeFrameVars
         ? resolveGlobalSymbols(params, options)
         : Promise.resolve([]),
     ])
   if (!includeFrameVars) {
     log('skip globals/locals (includeFrameVars=false)')
   }
-  log('addr2line done', { programCounter, faultAdd })
+  log('addr2line done', { programCounter, faultAddr })
   log('globals count', globals.length)
   stacktraceLines.forEach((line, index) => {
     log('stacktrace line', index, line)
   })
   return createDecodeResult(
     panicInfo,
     programCounter,
-    faultAdd,
+    faultAddr,
     stacktraceLines,
     globals
   )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/riscv.js` around lines 1321 - 1349, Rename the truncated variable faultAdd to faultAddr throughout the block: update the const destructuring that receives addr2line results, any subsequent uses (the log call 'addr2line done', the createDecodeResult invocation), and any references nearby (e.g., where programCounter and faultAdd are logged or returned); ensure the symbol is consistently named faultAddr and that functions referenced here (addr2line, processPanicOutput, resolveGlobalSymbols, createDecodeResult, includeFrameVars) continue to receive the same values but with the clearer faultAddr identifier.
src/vendor/trbr/tool.js (1)
45-55: Wrap `JSON.parse` with a descriptive error so arduino-cli failures are diagnosable.
If `arduino-cli board details` ever returns non-JSON (older versions, partial stderr leakage, network errors leaving an empty stdout, etc.), this throws a bare `SyntaxError: Unexpected token …` with no hint that the source was the arduino-cli invocation, making field debugging painful.
♻️ Suggested fix

```diff
-  const { stdout } = await execBoardDetails({
+  const { stdout } = await execBoardDetails({
     arduinoCliPath,
     fqbn,
     arduinoCliConfigPath,
     additionalUrls,
     signal: options?.signal,
   })
-  const { build_properties } = JSON.parse(stdout)
+  let parsed
+  try {
+    parsed = JSON.parse(stdout)
+  } catch (err) {
+    throw new Error(
+      `Failed to parse 'arduino-cli board details' output for '${fqbn}': ${
+        err instanceof Error ? err.message : String(err)
+      }`
+    )
+  }
+  const { build_properties } = parsed
   return parseBuildProperties(build_properties)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/tool.js` around lines 45 - 55, Wrap the JSON.parse(stdout) call in a try/catch inside the function that calls execBoardDetails (the block that currently assigns const { build_properties } = JSON.parse(stdout)), catching any SyntaxError and rethrowing a new Error that clearly states the failure came from the arduino-cli "board details" invocation and includes the raw stdout (and error message); preserve the original error as cause if available. This change should be around the JSON.parse usage before calling parseBuildProperties(build_properties) so failures include stdout and a descriptive message referencing execBoardDetails and parseBuildProperties.
src/vendor/trbr/decode/decode.js (2)
79-90: Address the dangling TODO and dead fallback in `getAddr`.
`if (!addrLocation) return undefined` already filters falsy inputs at the top, so `addrLocation ?? '0'` on line 86 can never use the `'0'` branch. Either drop it or remove the early return. Also worth resolving (or removing) the `// TODO: is it needed?` comment now that this is part of the vendored API surface — otherwise it will outlive the original repo's context.
Want me to audit callers of `getAddr` and propose either inlining or removal where appropriate?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/decode.js` around lines 79 - 90, Remove the dangling TODO and the dead fallback by keeping the early guard (if (!addrLocation) return undefined) and changing the parseInt call to not use (addrLocation ?? '0'); instead pass the raw addrLocation when not a GDB line and the regAddr when it is: use parseInt(isGDBLine(addrLocation) ? addrLocation.regAddr : addrLocation, 16). Also delete the "// TODO: is it needed?" comment and ensure references to AddrLocation, getAddr, isGDBLine, and regAddr are preserved.
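The simplified helper, once the dead fallback is dropped, could read as below. `isGDBLine` is stubbed here so the sketch is self-contained; the real type guard lives elsewhere in the vendored code:

```javascript
// Stand-in for the real type guard: a GDB line carries a regAddr field.
function isGDBLine(value) {
  return typeof value === 'object' && value !== null && 'regAddr' in value
}

function getAddr(addrLocation) {
  if (!addrLocation) {
    return undefined // early guard already covers the falsy case
  }
  return parseInt(isGDBLine(addrLocation) ? addrLocation.regAddr : addrLocation, 16)
}
```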
463-465: Use `structuredClone` instead of `JSON.parse(JSON.stringify(...))`.
Three instances use `JSON.parse(JSON.stringify(...))` for deep cloning. `structuredClone` is faster, built into the Node.js version bundled with VS Code 1.85.0+, and preserves more types (e.g., `undefined` properties, `Date`). It also avoids silent data loss when fields like `Buffer` slip through.
Suggested changes

```diff
-  const copy = JSON.parse(JSON.stringify(locationAware))
+  const copy = structuredClone(locationAware)
@@
-  const copy = JSON.parse(JSON.stringify(locationAware))
+  const copy = structuredClone(locationAware)
@@
-  const copy = JSON.parse(JSON.stringify(result))
+  const copy = structuredClone(result)
```

Also applies to: lines 470-472, 637-638
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/decode.js` around lines 463 - 465, Replace the JSON.parse(JSON.stringify(...)) deep-clone calls with structuredClone to preserve types and avoid silent data loss: e.g., change const copy = JSON.parse(JSON.stringify(locationAware)) to const copy = structuredClone(locationAware) before mutating copy.location.file (used alongside fixWindowsPath). Apply the same replacement for the other two occurrences of JSON.parse(JSON.stringify(...)) in this file (the blocks that clone similar objects and then modify .location or related fields) so all clones use structuredClone.
src/vendor/trbr/index.d.ts (1)
220-223: Add `name` and narrow `code` to literal types in the `AbortError` definition.
The implementation at `src/vendor/trbr/abort.js` sets `this.name = 'AbortError'` and `this.code = 'ABORT_ERR'`, both as literal values. Updating the type definition to reflect these literals improves type safety for consumers who branch on `err.name === 'AbortError'` or `err.code === 'ABORT_ERR'`.
Suggested change

```diff
 export class AbortError extends Error {
   constructor();
-  code: string;
+  name: 'AbortError';
+  code: 'ABORT_ERR';
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/index.d.ts` around lines 220 - 223, Update the AbortError type declaration so its properties reflect the implementation: add a name property typed as the literal 'AbortError' and change code from string to the literal 'ABORT_ERR' on the exported AbortError class (keep the existing constructor signature); this ensures consumers can safely check err.name === 'AbortError' and err.code === 'ABORT_ERR'.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/vendor/trbr/decode/addr2Line.js`:
- Around line 38-75: The child-process error/exit handlers only reject the
in-flight command and log, which leaves this.queue promises unresolved; update
the gdb.on('error', ...) and gdb.on('exit', ...) handlers to create and store a
terminal error on the session (e.g., sessionErr), immediately reject
this.current (if present) and drain this.queue by rejecting every queued item's
reject callback with that terminal error, and ensure exec() checks for the
terminal error and rejects fast instead of queueing when set; reference symbols:
this.current, this.queue, gdb.on('error', ...), gdb.on('exit', ...),
_processQueue(), exec(), addr2line(), and session.close().
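The drain logic described above can be sketched independently of GDB. `GdbSessionSketch` is a hypothetical minimal class showing the terminal-error pattern, not the vendored session:

```javascript
class GdbSessionSketch {
  constructor() {
    this.queue = []
    this.current = undefined
    this.terminalError = undefined
  }
  // Called from the child's 'error'/'exit' handlers: reject everything in flight.
  _fail(err) {
    this.terminalError = err
    if (this.current) {
      this.current.reject(err)
      this.current = undefined
    }
    for (const item of this.queue.splice(0)) {
      item.reject(err)
    }
  }
  exec(command) {
    if (this.terminalError) {
      return Promise.reject(this.terminalError) // fail fast, never queue
    }
    return new Promise((resolve, reject) => {
      this.queue.push({ command, resolve, reject })
    })
  }
}
```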
- Around line 92-133: The start() promise attaches onData and onError listeners
to this.gdb/stdout/stderr but never removes them, causing duplicate buffering
(start()'s onData and the constructor's _onData both append the same chunk) and
stale listeners that can consume prompts; fix by removing these listeners (use
this.gdb.off/removeListener and the same handler references) in every resolve
and reject path inside start(), and also stop re-appending the chunk inside
start() (remove the this.buffer += chunk.toString() in the onData defined in
start()) so only _onData handles buffering; ensure onError also detaches the
listeners before rejecting.
In `@src/vendor/trbr/decode/coredump.js`:
- Around line 303-312: The stacktrace builder sets args with const args =
frameArgs[index]?.args || '' which yields a string when absent and breaks
stringify.js's location.args?.map; change this to default to an empty array
(FrameArg[]) so args is always an array (e.g., use frameArgs[index]?.args ||
[]), updating the stacktraceLines construction around btParsed/frameArgs to
ensure args is typed/initialized as FrameArg[] before being spread into the
stacktrace entry consumed by stringify.js.
- Around line 341-347: The fallback call to tryRawElfFallback currently drops
the third options argument (losing AbortSignal and debug), so update the
invocation to pass the original options through (call tryRawElfFallback(params,
raw, options)) and ensure tryRawElfFallback’s signature and its internal
recursive call to decodeCoredump propagate that same options object (preserving
options.signal and options.debug) so abort and tracing continue to work on the
repair path.
In `@src/vendor/trbr/decode/decode.js`:
- Around line 354-361: The stream-handling branch in isDecodeInputStreamSource
incorrectly overwrites input on each chunk (input = chunk.toString()), dropping
all but the last chunk; change decodeInput handling in that block to accumulate
chunks instead (collect Buffer chunks into an array and use Buffer.concat, or
append to a single buffer-safe accumulator) and then convert once to the final
input format, ensuring binary-safe concatenation for decodeInput.inputStream
consumption so multi-chunk streams are preserved during panic decoding.
- Around line 295-321: The finally block currently fires-and-forgets cleanup and
closes both fd and target which races with subsequent decodeCoredump reads and
can double-close the file descriptor; change it so the stream keeps ownership of
the fd (do not call fd.close()), await the stream closure and any errors before
returning (e.g., replace Promise.allSettled([...]).then(...) with an awaited
promise that waits for target?.close via a Promise wrapper and logs/rethrows
errors as needed), ensure you still call pipeline(decodeInput.inputStream,
target) and await it, and confirm decodeCoredump is only invoked after the
awaited cleanup; reference the variables and calls fd, target, pipeline, and
decodeCoredump to locate the code to modify.
- Around line 138-145: Replace the runtime require call in
isDecodeInputStreamSource with a top-level import of Readable from 'stream' and
use that imported Readable in the function (so change isDecodeInputStreamSource
to check arg.inputStream instanceof Readable); in the stream-reading loop (the
for-await block that currently does input = chunk.toString()) append each chunk
instead (input += chunk.toString()) so chunks are concatenated rather than
overwritten; and in decodeCoredump ensure the cleanup Promise returned by
Promise.allSettled(...) (and any fd.createWriteStream() usage) is awaited or the
write stream is finished before closing the underlying fd—do not call fd.close()
while the stream may still own the descriptor, instead await the stream
finish/close or the Promise.allSettled result so the coredump is fully flushed
before returning.
In `@src/vendor/trbr/decode/regAddr.js`:
- Around line 196-204: The code incorrectly assigns parsedMethod.name to the
lineNumber field (violating the GDBLine contract) and includes an unreachable
guard for '?? ()'; change the return so it does not set lineNumber to a
non-numeric method string — either omit lineNumber entirely when only a method
is available or add a proper method property (e.g., return { regAddr, method:
parsedMethod.name }) on the ParsedGDBLine/return shape instead; also remove the
unreachable check parsedMethod.name !== '?? ()' and keep the existing
parsedMethod.name !== '??' guard and the hasValidMethod/hasValidLine logic so
callers only see numeric lineNumber or '??', while method names are carried in a
dedicated field if needed.
In `@src/vendor/trbr/decode/riscv.js`:
- Around line 181-200: The current truthy check `if (regAddr &&
regNameValidator(regName))` drops registers with value 0; change the condition
to test parse success rather than truthiness (e.g., check that parseInt produced
a number and is not NaN) and still validate the name with
`regNameValidator(regName)`, then assign `currentRegDump.regs[regName] =
regAddr` and update `programCounter` for 'MEPC' and `faultCode`/`faultAddr` for
'MCAUSE'/'MTVAL' even when the parsed value is 0; update the conditional around
the parsing in the same block that iterates `regMatches` to ensure zero-valued
registers are recorded.
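The parse-success check can be isolated as below; `recordRegister` and the stubbed `regNameValidator` are illustrative, not the vendored code, but the `Number.isNaN` test is the fix both comments describe:

```javascript
// Stand-in validator: real code validates against the architecture's register set.
const regNameValidator = (name) => /^[A-Z0-9]+$/.test(name)

function recordRegister(regs, regName, rawValue) {
  const regAddr = parseInt(rawValue, 16)
  // Test parse success, not truthiness, so a register holding 0 is still recorded.
  if (!Number.isNaN(regAddr) && regNameValidator(regName)) {
    regs[regName] = regAddr
    return true
  }
  return false
}
```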
In `@src/vendor/trbr/decode/riscvPanicParse.js`:
- Around line 132-141: The current truthy check "if (regAddr &&
regNameValidator(regName))" drops valid zero-valued registers (e.g., X0); change
this to explicitly validate presence of a numeric regAddr instead of
truthiness—for example, require regNameValidator(regName) and that regAddr is a
number and not NaN (e.g., !Number.isNaN(regAddr) or typeof regAddr === 'number'
&& !Number.isNaN(regAddr))—so that currentRegDump.regs[regName] is set for zero
values while preserving the special-case handling for MEPC, MCAUSE and MTVAL.
In `@src/vendor/trbr/decode/stringify.js`:
- Around line 10-19: The defaultOptions object contains an unused forceColor
property and the JSDoc for StringifyOptions misstates the lineSeparator default;
update defaultOptions to use the same option key that createColorFn reads
(rename/remove forceColor and add color: 'force' | 'disable' default as
appropriate) or make createColorFn read forceColor consistently, and correct the
JSDoc default for lineSeparator to '\r\n' so the typedef matches the actual
default; reference defaultOptions, createColorFn, and StringifyOptions when
making the edits.
- Around line 198-206: The code passes top = thread.result.stacktraceLines?.[0]
into stringifyAddrLocation which can be undefined for an empty stacktrace and
causes a TypeError; update the loop to guard/normalize top before calling
stringifyAddrLocation (e.g., check if thread.result.stacktraceLines &&
thread.result.stacktraceLines.length > 0, or provide a safe default location
object/null and have stringifyAddrLocation handle it) so stringifyAddrLocation
is never invoked with undefined; locate the usage in the loop that builds lines
(references: variable top, thread.result.stacktraceLines, and function
stringifyAddrLocation) and implement the guard/normalization there.
In `@src/vendor/trbr/decode/xtensa.js`:
- Around line 89-92: The code uses a falsy check on panicInfo.faultCode which
drops valid code 0; update the conditional around panicInfo.faultCode so it
checks for presence rather than truthiness (e.g., panicInfo.faultCode !==
undefined && panicInfo.faultCode !== null or typeof panicInfo.faultCode !==
'undefined') and then set faultMessage = exceptions[panicInfo.faultCode]; modify
the block around the faultMessage assignment (the panicInfo.faultCode check) to
use that explicit existence check so exceptions[0] ('Illegal instruction') is
preserved.
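The explicit presence check can be sketched as follows; the two-entry `exceptions` table is a stand-in for the real Xtensa list:

```javascript
// Stand-in table: index 0 is 'Illegal instruction' per the review comment.
const exceptions = ['Illegal instruction', 'Syscall']

function faultMessageFor(panicInfo) {
  // Presence check, not truthiness, so fault code 0 still maps to a message.
  if (panicInfo.faultCode !== undefined && panicInfo.faultCode !== null) {
    return exceptions[panicInfo.faultCode]
  }
  return undefined
}
```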
In `@src/vendor/trbr/index.d.ts`:
- Around line 79-92: Add the missing evaluateEvent callback to the
CapturerOptions type so callers can pass the same option used by the
implementation (reference: CapturerOptions and the evaluateEvent usage in
capturer.js), and remove or make the Capturer._eventBus field non-public in the
declaration (do not expose the internal EventEmitter as a public property) so
the class surface matches the implementation; update the interface/type for
CapturerOptions to include evaluateEvent: (evt: any) => boolean |
Promise<boolean> (or the correct signature used in capturer.js) and change the
Capturer class declaration so _eventBus is absent or marked private/internal
instead of typed as a public EventEmitter.
---
Nitpick comments:
In `@src/vendor/trbr/capturer/framer.js`:
- Around line 104-120: The hasSignal check in _finalize is redundant because
pushLine only creates this._active when isStartLine(line) is true, so update the
_finalize method to remove the unreachable hasSignal logic: delete the hasSignal
variable and change the guard to just check lines.length > 0 (i.e., if
(lines.length > 0) { finalized.push({...}) }), and remove any reference to
isStartLine in this function; alternatively, if you prefer defensive coding,
keep the hasSignal expression but add a short comment above it explaining it's a
defensive safeguard against future refactors and why it may appear redundant
given pushLine and isStartLine.
- Around line 5-15: startPatterns and reasonPatterns are identical and should be
deduplicated: create a single shared constant (e.g., basePatterns or
commonPatterns) containing the regex list and assign both startPatterns and
reasonPatterns to that shared constant (or export both as references to it),
optionally freezing the array if immutability is desired; update usages to
reference startPatterns and reasonPatterns as before so external code is
unaffected.
In `@src/vendor/trbr/decode/_tinyrainbow.js`:
- Around line 1-22: The stub currently always returns identity color functions
(palette/createColors/identity) which silently drop colorization if callers
switch to `force` or default modes; detect when FORCE_COLOR is set (e.g.
process.env.FORCE_COLOR truthy) at module load and emit a single runtime warning
(console.warn or processLogger.warn) indicating the vendored tinyrainbow is a
no-op and will not emit ANSI colors, while leaving createColors, palette and
identity behavior unchanged so existing callers continue to work; this preserves
the current API (createColors, default export palette) but makes the silent
degradation observable.
In `@src/vendor/trbr/decode/coredump.js`:
- Line 320: The parseInt call used to set coreId currently omits the radix
(coreId: parseInt(tid)); update this to pass an explicit radix so it always
parses decimal (e.g., coreId: parseInt(tid, 10)). Locate the assignment where
coreId is derived from tid in the code around the coredump parsing logic and
replace parseInt(tid) with parseInt(tid, 10) to satisfy the radix lint rule and
ensure consistent parsing.
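Why the radix matters can be shown in two lines; the `'0x10'` input is a fabricated example, not an actual thread id from the coredump:

```javascript
// Without an explicit radix, parseInt falls back to prefix sniffing:
// a '0x' prefix flips it into base 16. Pinning the radix to 10 makes
// the thread-id parse deterministic.
console.log(parseInt('42'));       // 42
console.log(parseInt('0x10'));     // 16 (hex sniffing kicked in)
console.log(parseInt('0x10', 10)); // 0  (parses '0', stops at 'x')
```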
- Around line 149-160: Remove the stray JSDoc block that documents "@param
{string} str", "@param {string} key", and "@returns {string | undefined}" which
is a leftover for extractMiListContent and now sits immediately above the
decodeCoredump JSDoc; simply delete that orphan comment block so only the
correct JSDoc for decodeCoredump remains (verify the extractMiListContent
reference is not needed in this file and that decodeCoredump's JSDoc starts
immediately after the removed block).
- Around line 38-83: In tryRawElfFallback: detect ELF class by reading
e_ident[4] at offset+4 and only use the 32-bit readUInt32LE path when EI_CLASS
=== ELFCLASS32; for ELFCLASS64 use the 64-bit fields (readBigUInt64LE /
readUInt32LE as appropriate) and adjust header offsets/sizes accordingly
(e_phoff/e_shoff are 64-bit in ELF64 and program header fields p_offset/p_filesz
are 64-bit). Also include section header space in the size estimate by computing
shEnd = e_shoff + e_shentsize * e_shnum and taking elfTotalSize =
max(maxEndFromProgramHeaders, shEnd). Reference: function tryRawElfFallback,
variables e_ident, e_phoff, e_phentsize, e_phnum, e_shoff, e_shentsize, e_shnum,
and the p_offset/p_filesz reads.
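A sketch of the EI_CLASS dispatch under the ELF specification (offsets 28 and 32 for `e_phoff` come from the ELF32/ELF64 header layouts; the fabricated buffer below only fills the fields this helper reads):

```javascript
// e_ident[4] (EI_CLASS) distinguishes ELF32 (1) from ELF64 (2); the
// 32-bit readUInt32LE path is only valid for ELFCLASS32.
const ELFCLASS32 = 1;
const ELFCLASS64 = 2;

function readPhoff(buf, offset = 0) {
  const eiClass = buf[offset + 4];
  if (eiClass === ELFCLASS32) {
    return BigInt(buf.readUInt32LE(offset + 28)); // e_phoff is 4 bytes in ELF32
  }
  if (eiClass === ELFCLASS64) {
    return buf.readBigUInt64LE(offset + 32); // e_phoff is 8 bytes in ELF64
  }
  throw new Error(`unknown EI_CLASS ${eiClass}`);
}

// Minimal fabricated header: magic, class byte, and e_phoff only.
const elf32 = Buffer.alloc(52);
elf32.write('\x7fELF', 0, 'latin1');
elf32[4] = ELFCLASS32;
elf32.writeUInt32LE(52, 28);
console.log(readPhoff(elf32)); // 52n
```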
In `@src/vendor/trbr/decode/decode.js`:
- Around line 79-90: Remove the dangling TODO and the dead fallback by keeping
the early guard (if (!addrLocation) return undefined) and changing the parseInt
call to not use (addrLocation ?? '0'); instead pass the raw addrLocation when
not a GDB line and the regAddr when it is: use parseInt(isGDBLine(addrLocation)
? addrLocation.regAddr : addrLocation, 16). Also delete the "// TODO: is it
needed?" comment and ensure references to AddrLocation, getAddr, isGDBLine, and
regAddr are preserved.
- Around line 463-465: Replace the JSON.parse(JSON.stringify(...)) deep-clone
calls with structuredClone to preserve types and avoid silent data loss: e.g.,
change const copy = JSON.parse(JSON.stringify(locationAware)) to const copy =
structuredClone(locationAware) before mutating copy.location.file (used
alongside fixWindowsPath). Apply the same replacement for the other two
occurrences of JSON.parse(JSON.stringify(...)) in this file (the blocks that
clone similar objects and then modify .location or related fields) so all clones
use structuredClone.
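For illustration, the difference between the JSON round-trip and `structuredClone` (available in Node.js 17+); the `locationAware` shape below is a simplified stand-in for the real decode objects:

```javascript
// JSON round-trips silently drop undefined fields and stringify Dates;
// structuredClone deep-copies while preserving such types.
const locationAware = {
  location: { file: 'C:\\sketch\\main.ino', line: 12 },
  seenAt: new Date(0),
};

const jsonCopy = JSON.parse(JSON.stringify(locationAware));
console.log(typeof jsonCopy.seenAt); // 'string' (the Date was flattened)

const cloned = structuredClone(locationAware);
cloned.location.file = cloned.location.file.replace(/\\/g, '/');
console.log(cloned.seenAt instanceof Date); // true
console.log(locationAware.location.file);   // original left untouched
```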
In `@src/vendor/trbr/decode/regAddr.js`:
- Around line 162-175: Remove the duplicate JSDoc block that repeats the
"Normalize a parsed GDB line entry" documentation; keep a single JSDoc comment
containing the `@param` {ParsedGDBLine} entry and `@returns` {GDBLine |
ParsedGDBLine} tags directly above the normalize function (the block describing
file/lineNumber behavior) so there is only one docblock for that function.
In `@src/vendor/trbr/decode/regs.js`:
- Line 36: registerSets.riscv's order is unverified and may not match the GDB
ILP32 register layout used by gdbRegsInfoRiscvIlp32, which can mislabel
registers when the panic parser indexes into it; compare the current
registerSets.riscv array in src/vendor/trbr/decode/regs.js against the
authoritative gdbRegsInfoRiscvIlp32 definition (field order and names),
reorder/rename entries in registerSets.riscv to exactly match that layout, add a
short comment citing gdbRegsInfoRiscvIlp32 as the source of truth, and add a
unit test or a sanity-check in the panic parser that asserts the expected count
and key indices to prevent regressions.
In `@src/vendor/trbr/decode/riscv.js`:
- Around line 1321-1349: Rename the truncated variable faultAdd to faultAddr
throughout the block: update the const destructuring that receives addr2line
results, any subsequent uses (the log call 'addr2line done', the
createDecodeResult invocation), and any references nearby (e.g., where
programCounter and faultAdd are logged or returned); ensure the symbol is
consistently named faultAddr and that functions referenced here (addr2line,
processPanicOutput, resolveGlobalSymbols, createDecodeResult, includeFrameVars)
continue to receive the same values but with the clearer faultAddr identifier.
In `@src/vendor/trbr/decode/riscvPanicParse.js`:
- Around line 177-195: The loop over stackDump currently grows stackData with
repeated Buffer.concat (stackData = Buffer.concat([stackData, lineData])),
causing O(n²) reallocations; change the implementation in the stackDump.forEach
block to collect each lineData into an array (e.g., chunks) and track
totalLength (use bytesInLine or sum lineData.length), preserve the existing
baseAddr/stackBaseAddr/bytesInLine validation logic, then after the loop perform
a single Buffer.concat(chunks, totalLength) to produce stackData once. Ensure
variable names referenced (stackDump.forEach, stackData, lineData, bytesInLine,
baseAddr, stackBaseAddr) are updated accordingly.
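The single-concat pattern can be sketched like this; `joinStackDump` and the `words` field are illustrative names, not trbr's actual shapes:

```javascript
// Growing a Buffer with repeated Buffer.concat copies every prior byte on
// each iteration (O(n^2)); collecting chunks and concatenating once copies
// each byte a single time.
function joinStackDump(stackDump) {
  const chunks = [];
  let totalLength = 0;
  for (const line of stackDump) {
    const lineData = Buffer.from(line.words); // per-line payload (illustrative)
    chunks.push(lineData);
    totalLength += lineData.length;
  }
  return Buffer.concat(chunks, totalLength);
}

const joined = joinStackDump([{ words: [1, 2] }, { words: [3, 4] }]);
console.log([...joined]); // [ 1, 2, 3, 4 ]
```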
In `@src/vendor/trbr/decode/stringify.js`:
- Around line 104-118: The code mutates process.env.FORCE_COLOR inside the
options.color === 'force' branch which causes global side effects and races;
update the logic in the stringify flow (the branch using options.color,
tinyrainbowCreateColors(), create(), colorizeFn, resetColor) to avoid touching
process.env: instead call tinyrainbowCreateColors with an explicit forceColor
argument (or extend tinyrainbowCreateColors/create to accept a force flag) and
create the colorizeFn using that flag, remove the process.env set/delete and the
resetColor mutator, and return the colorizeFn (and a no-op reset if you must
preserve the API) so callers get forced coloring without global env mutation.
In `@src/vendor/trbr/exec.js`:
- Around line 12-25: The exec helper (exported function exec) currently calls
execFile without an explicit maxBuffer, risking
ERR_CHILD_PROCESS_STDIO_MAXBUFFER for large stdout/stderr; update exec to set a
sensible default maxBuffer (e.g., 10 * 1024 * 1024) and merge it with any
user-provided options so callers can still override it, or alternatively add a
JSDoc comment above exec documenting the implicit ~1MB default and instructing
callers to pass options.maxBuffer to override—refer to the exec function and the
execFile call when applying the change.
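A sketch of the option merge; the 10 MiB default is an assumed value for illustration and `withMaxBuffer` is a hypothetical helper name:

```javascript
// execFile's implicit maxBuffer is ~1 MiB; exceeding it rejects with
// ERR_CHILD_PROCESS_STDIO_MAXBUFFER. Merging a larger default while letting
// callers override keeps big gdb/addr2line dumps and custom limits working.
const DEFAULT_MAX_BUFFER = 10 * 1024 * 1024;

function withMaxBuffer(options = {}) {
  // caller-supplied options win over the default
  return { maxBuffer: DEFAULT_MAX_BUFFER, ...options };
}

console.log(withMaxBuffer().maxBuffer);                    // 10485760
console.log(withMaxBuffer({ maxBuffer: 1024 }).maxBuffer); // 1024
```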
In `@src/vendor/trbr/index.d.ts`:
- Around line 220-223: Update the AbortError type declaration so its properties
reflect the implementation: add a name property typed as the literal
'AbortError' and change code from string to the literal 'ABORT_ERR' on the
exported AbortError class (keep the existing constructor signature); this
ensures consumers can safely check err.name === 'AbortError' and err.code ===
'ABORT_ERR'.
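A minimal runtime class matching that contract (a sketch; the default message text is assumed, not taken from trbr):

```javascript
// Literal name/code values let consumers branch on either field:
// err.name === 'AbortError' or err.code === 'ABORT_ERR'.
class AbortError extends Error {
  constructor(message = 'The operation was aborted') {
    super(message);
    this.name = 'AbortError';
    this.code = 'ABORT_ERR';
  }
}

const abortErr = new AbortError();
console.log(abortErr instanceof Error);    // true
console.log(abortErr.name, abortErr.code); // AbortError ABORT_ERR
```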
In `@src/vendor/trbr/tool.js`:
- Around line 45-55: Wrap the JSON.parse(stdout) call in a try/catch inside the
function that calls execBoardDetails (the block that currently assigns const {
build_properties } = JSON.parse(stdout)), catching any SyntaxError and
rethrowing a new Error that clearly states the failure came from the arduino-cli
"board details" invocation and includes the raw stdout (and error message);
preserve the original error as cause if available. This change should be around
the JSON.parse usage before calling parseBuildProperties(build_properties) so
failures include stdout and a descriptive message referencing execBoardDetails
and parseBuildProperties.
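A sketch of the wrap; `parseBoardDetails` is a hypothetical helper name, and `Error`'s `cause` option needs Node.js 16.9+:

```javascript
// Re-throwing with context keeps the raw stdout and the failing command
// in the error instead of a bare SyntaxError.
function parseBoardDetails(stdout) {
  try {
    const { build_properties } = JSON.parse(stdout);
    return build_properties;
  } catch (cause) {
    throw new Error(
      `arduino-cli 'board details' returned invalid JSON: ${stdout}`,
      { cause }, // preserve the original SyntaxError
    );
  }
}

console.log(parseBoardDetails('{"build_properties": ["mcu=esp32"]}'));
```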
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4cd33ae0-67ad-4184-b319-5bebc0ec501c
⛔ Files ignored due to path filters (1)
- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (28)
- esbuild.js
- package.json
- src/crashDecoder.ts
- src/vendor/trbr/abort.js
- src/vendor/trbr/capturer/capturer.js
- src/vendor/trbr/capturer/framer.js
- src/vendor/trbr/capturer/lineDecoder.js
- src/vendor/trbr/capturer/types.js
- src/vendor/trbr/decode/_tinyrainbow.js
- src/vendor/trbr/decode/addr2Line.js
- src/vendor/trbr/decode/coredump.js
- src/vendor/trbr/decode/decode.js
- src/vendor/trbr/decode/decode.text.js
- src/vendor/trbr/decode/decodeParams.js
- src/vendor/trbr/decode/gdbMi.js
- src/vendor/trbr/decode/globals.js
- src/vendor/trbr/decode/regAddr.js
- src/vendor/trbr/decode/regs.js
- src/vendor/trbr/decode/riscv.js
- src/vendor/trbr/decode/riscvPanicParse.js
- src/vendor/trbr/decode/stringify.js
- src/vendor/trbr/decode/xtensa.js
- src/vendor/trbr/decode/xtensaPanicParse.js
- src/vendor/trbr/exec.js
- src/vendor/trbr/index.d.ts
- src/vendor/trbr/index.js
- src/vendor/trbr/os.js
- src/vendor/trbr/tool.js
💤 Files with no reviewable changes (1)
- esbuild.js
Actionable comments posted: 3
🧹 Nitpick comments (3)
src/vendor/trbr/decode/riscvPanicParse.js (1)
175-206: Minor: `stackBaseAddr === 0` sentinel conflates "unset" with "first line at addr 0".

If the first stack line ever started at address `0` (it won't on ESP targets in practice, but the parser doesn't enforce that), `stackBaseAddr` would stay `0` and the contiguity check would be skipped on every subsequent line. Tracking initialization with an explicit boolean is safer and clearer.

♻️ Proposed refactor

```diff
-function getStackAddrAndData({ stackDump }) {
-  let stackBaseAddr = 0
-  let baseAddr = 0
-  let bytesInLine = 0
+function getStackAddrAndData({ stackDump }) {
+  let stackBaseAddr = 0
+  let baseAddr = 0
+  let bytesInLine = 0
+  let initialized = false
   /** @type {Buffer[]} */
   const chunks = []
   let totalLength = 0
   stackDump.forEach((line) => {
     const prevBaseAddr = baseAddr
     baseAddr = line.baseAddr
-    if (stackBaseAddr === 0) {
+    if (!initialized) {
       stackBaseAddr = baseAddr
+      initialized = true
     } else if (baseAddr !== prevBaseAddr + bytesInLine) {
       throw new Error('Invalid base address')
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/riscvPanicParse.js` around lines 175 - 206, The function getStackAddrAndData uses stackBaseAddr === 0 as a sentinel which conflates a valid address 0 with "unset"; change this by adding an explicit boolean (e.g., stackBaseAddrSet or sawFirstLine) to track whether the base has been initialized, set it when processing the first line, and use that flag instead of comparing stackBaseAddr to 0 when deciding to set stackBaseAddr or perform the contiguity check; update any logic referencing stackBaseAddr initialization while keeping the rest of getStackAddrAndData (baseAddr, prevBaseAddr, bytesInLine, chunks) unchanged.

src/vendor/trbr/decode/xtensa.js (1)
17-21: Nit: `TRBR_DEBUG` only enables logging on the exact string `'true'`.

Common conventions like `TRBR_DEBUG=1`, `TRBR_DEBUG=yes`, or `TRBR_DEBUG=TRUE` will silently be treated as off. Consider broadening the check.

♻️ Proposed tweak

```diff
-  const writer =
-    debug ?? (process.env.TRBR_DEBUG === 'true' ? console.log : undefined)
+  const debugEnv = (process.env.TRBR_DEBUG ?? '').toLowerCase()
+  const envEnabled = debugEnv === 'true' || debugEnv === '1' || debugEnv === 'yes'
+  const writer = debug ?? (envEnabled ? console.log : undefined)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/xtensa.js` around lines 17 - 21, The TRBR_DEBUG check in createXtensaLogger treats only the exact string 'true' as enabled; update the environment check inside createXtensaLogger (in xtensa.js) so it treats common truthy values case-insensitively (e.g. '1', 'true', 'yes') as enabling logging; adjust the expression that sets writer (the debug ?? (process.env.TRBR_DEBUG === 'true' ? console.log : undefined) line) to test process.env.TRBR_DEBUG against a case-insensitive truthy pattern (or normalize and compare) so writer is enabled for those values while still remaining undefined for falsy values.

src/vendor/trbr/decode/coredump.js (1)
75-79: Use the debug logger for cleanup warnings instead of `console.warn`.

The rest of the file routes diagnostics through `createCoredumpLogger` (respecting `options.debug` and `TRBR_DEBUG`). Hard-coding `console.warn` here bypasses that and writes to stderr unconditionally, which is noisy for callers that intentionally suppressed debug output.

♻️ Proposed refactor

```diff
-async function tryRawElfFallback(params, raw, options) {
+async function tryRawElfFallback(params, raw, options) {
+  const log = createCoredumpLogger(options?.debug)
@@
-      .catch((err) => console.warn('Failed to clean up temp dir:', err))
+      .catch((err) => log('failed to clean up temp dir', err))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/coredump.js` around lines 75 - 79, Replace the hard-coded console.warn in the cleanup finally block with the module's debug-aware logger: obtain the logger via createCoredumpLogger(options) (or reuse the existing logger instance if already created in the surrounding scope) and call logger.warn (or logger.debug) instead of console.warn so the cleanup warning respects options.debug / TRBR_DEBUG; ensure the logger is in scope for the finally block and remove the console.warn call.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/vendor/trbr/decode/coredump.js`:
- Around line 149-161: The JSDoc for decodeCoredump lists tryRepair as the 3rd
param and options as 4th, but the function signature is decodeCoredump(params,
input, options = {}, tryRepair = true); update the JSDoc parameter order to
match the implementation (declare options as the 3rd parameter and tryRepair as
the 4th) and ensure the types/comments reference the same names used by the
function (e.g., DecodeOptions for options and boolean for tryRepair); this keeps
calls such as the one from tryRawElfFallback consistent with
decodeCoredump(params, input, options, false).
- Around line 313-322: Normalize the program counter to a hex string and avoid
calling toHexString on undefined: compute a programCounterHex (using
toHexString(programCounter) only if programCounter !== undefined, otherwise a
safe sentinel like '??' or null), then set result.faultInfo.programCounter.addr
to that hex string and use the same programCounterHex for the fallback regAddr
in the stacktraceLines[0] branch; update references to programCounter,
toHexString, stacktraceLines and result.faultInfo.programCounter so both addr
and regAddr are consistently hex strings and no toHexString(undefined) can
occur.
In `@src/vendor/trbr/decode/riscvPanicParse.js`:
- Around line 70-94: The JSDoc treats the runtime array gdbRegsInfoRiscvIlp32 as
a type: change the top-level type of gdbRegsInfo to use the element type of that
array (e.g. Record<RiscvTargetArch, (typeof gdbRegsInfoRiscvIlp32)[number][]>)
instead of gdbRegsInfoRiscvIlp32, and update the predicate return annotation in
createRegNameValidator to use regName is (typeof gdbRegsInfoRiscvIlp32)[number]
(not gdbRegsInfoRiscvIlp32) so the validator narrows to an array element type;
alternatively add a typedef alias like /** `@typedef` {(typeof
gdbRegsInfoRiscvIlp32)[number]} RegName */ and reference RegName in both places
(and keep the existing cast on the includes call).
---
Nitpick comments:
In `@src/vendor/trbr/decode/coredump.js`:
- Around line 75-79: Replace the hard-coded console.warn in the cleanup finally
block with the module's debug-aware logger: obtain the logger via
createCoredumpLogger(options) (or reuse the existing logger instance if already
created in the surrounding scope) and call logger.warn (or logger.debug) instead
of console.warn so the cleanup warning respects options.debug / TRBR_DEBUG;
ensure the logger is in scope for the finally block and remove the console.warn
call.
In `@src/vendor/trbr/decode/riscvPanicParse.js`:
- Around line 175-206: The function getStackAddrAndData uses stackBaseAddr === 0
as a sentinel which conflates a valid address 0 with "unset"; change this by
adding an explicit boolean (e.g., stackBaseAddrSet or sawFirstLine) to track
whether the base has been initialized, set it when processing the first line,
and use that flag instead of comparing stackBaseAddr to 0 when deciding to set
stackBaseAddr or perform the contiguity check; update any logic referencing
stackBaseAddr initialization while keeping the rest of getStackAddrAndData
(baseAddr, prevBaseAddr, bytesInLine, chunks) unchanged.
In `@src/vendor/trbr/decode/xtensa.js`:
- Around line 17-21: The TRBR_DEBUG check in createXtensaLogger treats only the
exact string 'true' as enabled; update the environment check inside
createXtensaLogger (in xtensa.js) so it treats common truthy values
case-insensitively (e.g. '1', 'true', 'yes') as enabling logging; adjust the
expression that sets writer (the debug ?? (process.env.TRBR_DEBUG === 'true' ?
console.log : undefined) line) to test process.env.TRBR_DEBUG against a
case-insensitive truthy pattern (or normalize and compare) so writer is enabled
for those values while still remaining undefined for falsy values.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 83c33e0b-3cf7-4e4f-8b37-8abe676c5193
📒 Files selected for processing (7)
- src/vendor/trbr/decode/coredump.js
- src/vendor/trbr/decode/decode.js
- src/vendor/trbr/decode/regAddr.js
- src/vendor/trbr/decode/riscv.js
- src/vendor/trbr/decode/riscvPanicParse.js
- src/vendor/trbr/decode/stringify.js
- src/vendor/trbr/decode/xtensa.js
✅ Files skipped from review due to trivial changes (1)
- src/vendor/trbr/decode/decode.js
🚧 Files skipped from review as they are similar to previous changes (3)
- src/vendor/trbr/decode/regAddr.js
- src/vendor/trbr/decode/riscv.js
- src/vendor/trbr/decode/stringify.js
@coderabbitai

Tip: For best results, initiate chat on the files or code changes.

✏️ Learnings added
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/vendor/trbr/decode/addr2Line.js`:
- Around line 239-240: The helper currently fabricates a fake AddrLine ({
location: '??' }) when an input address is missing; instead, preserve missing
addresses by returning undefined for those entries and widen the JSDoc return
type to reflect that. Locate the function building AddrLine entries (the branch
that returns { location: '??' } when an address is falsy) and change it to
return undefined for missing inputs; update the JSDoc return from
Promise<AddrLine[]> to Promise<(AddrLine|undefined)[]> (or
Promise<Array<AddrLine|undefined>>) and apply the same change to the other
similar branch so callers (e.g., riscv.js passing [panicInfo.programCounter,
panicInfo.faultAddr]) receive undefined rather than a fabricated record.
- Around line 105-173: The start() Promise can hang if GDB's initial
banner/prompt was already appended to this.buffer by the constructor's _onData
before start() registers onData; after attaching this.gdb.on('error', onError)
and stdout/stderr .on('data', onData), immediately run the same inspection logic
that onData uses (check this.buffer for noSuchFileOrDirectory,
notExecutableFormat, fileFormatNotRecognized, and prompt) so already-buffered
startup output is consumed; reuse the onData checks (or call the onData handler
once) to reject with the same Error instances or resolve when prompt is present,
then return so start() does not hang.
In `@src/vendor/trbr/decode/riscv.js`:
- Around line 87-95: The decoder currently registered under riscvDecoders
(esp32c2/esp32c3/esp32c6/esp32h2/esp32h4/esp32p4 -> decodeRiscv) aborts when the
input contains multiple core dumps; change decodeRiscv to handle multi-core
inputs by selecting the active/crashing core (look for common markers like
"current core", "core N", or the panic header) and decode only that core, or if
you cannot reliably pick one, have decodeRiscv explicitly skip/return undefined
for multi-core inputs so the path is treated as unsupported instead of
hard-failing. Ensure the selection logic is deterministic and documented in the
decodeRiscv function comments.
- Around line 336-354: The start promise for GdbServer (around the
server.listen(0) block) currently only resolves on 'listening' or rejects on
abort, so if server.listen emits an 'error' the await never settles; add a
one-shot server.once('error', errorHandler) that rejects the promise with the
received error and closes the server, and factor a cleanup function that removes
the 'abort' listener, the 'listening' listener, and the 'error' listener; ensure
abortHandler, the 'listening' callback, and the errorHandler all call that
cleanup before resolving/rejecting so no listeners leak.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 00cfca1e-b8d3-4f7f-8641-34abbaf5e7fe
📒 Files selected for processing (7)
- src/vendor/trbr/capturer/framer.js
- src/vendor/trbr/decode/addr2Line.js
- src/vendor/trbr/decode/decodeParams.js
- src/vendor/trbr/decode/riscv.js
- src/vendor/trbr/index.d.ts
- src/vendor/trbr/index.js
- src/vendor/trbr/targets.js
🚧 Files skipped from review as they are similar to previous changes (3)
- src/vendor/trbr/decode/decodeParams.js
- src/vendor/trbr/capturer/framer.js
- src/vendor/trbr/index.js
🧹 Nitpick comments (2)
src/vendor/trbr/decode/riscv.js (2)
287-296: Minor: redundant ternary expression.

Line 290 has `regDumps.length > 1 ? regDumps[0] : regDumps[0]`; both branches return the same value. This appears to be leftover from addressing the multi-core review comment; the ternary can be simplified to just `regDumps[0]`.

♻️ Suggested simplification

```diff
-    const activeCore = regDumps.length > 1 ? regDumps[0] : regDumps[0]
+    const activeCore = regDumps[0]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/riscv.js` around lines 287 - 296, The assignment to activeCore uses a redundant ternary (regDumps.length > 1 ? regDumps[0] : regDumps[0]); replace it with a direct selection const activeCore = regDumps[0]; (referencing regDumps and activeCore) to simplify the code and preserve existing behavior; optionally ensure callers handle an empty regDumps if needed.
51-85: Consider deduplicating the register list.

`gdbRegsInfoRiscvIlp32` is defined identically in both `riscv.js` and `riscvPanicParse.js`. While this doesn't cause functional issues, consider exporting from one location to avoid drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vendor/trbr/decode/riscv.js` around lines 51 - 85, gdbRegsInfoRiscvIlp32 is duplicated; remove the duplicate by exporting the single canonical constant from one module (keep the /** `@type` {const} */ annotation) and import it where needed instead of redefining it; specifically, export gdbRegsInfoRiscvIlp32 from the module that currently defines it (e.g., riscv.js) and update the other module (e.g., riscvPanicParse.js) to import and use that exported symbol, deleting the redundant local definition and adjusting any references to use the imported name.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/vendor/trbr/decode/riscv.js`:
- Around line 287-296: The assignment to activeCore uses a redundant ternary
(regDumps.length > 1 ? regDumps[0] : regDumps[0]); replace it with a direct
selection const activeCore = regDumps[0]; (referencing regDumps and activeCore)
to simplify the code and preserve existing behavior; optionally ensure callers
handle an empty regDumps if needed.
- Around line 51-85: gdbRegsInfoRiscvIlp32 is duplicated; remove the duplicate
by exporting the single canonical constant from one module (keep the /** `@type`
{const} */ annotation) and import it where needed instead of redefining it;
specifically, export gdbRegsInfoRiscvIlp32 from the module that currently
defines it (e.g., riscv.js) and update the other module (e.g.,
riscvPanicParse.js) to import and use that exported symbol, deleting the
redundant local definition and adjusting any references to use the imported
name.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9e9b7fbc-e52a-4174-8d8b-a407d66b55e5
📒 Files selected for processing (7)
- src/vendor/trbr/decode/addr2Line.js
- src/vendor/trbr/decode/coredump.js
- src/vendor/trbr/decode/decode.js
- src/vendor/trbr/decode/regs.js
- src/vendor/trbr/decode/riscv.js
- src/vendor/trbr/decode/riscvPanicParse.js
- src/vendor/trbr/targets.js
✅ Files skipped from review due to trivial changes (2)
- src/vendor/trbr/targets.js
- src/vendor/trbr/decode/regs.js
Actionable comments posted: 3
🧹 Nitpick comments (1)
src/test/crashDecoder.test.ts (1)
708-716: Avoid hardcoding full register order in test data.

This duplicates the canonical list and is easy to drift. Prefer importing the canonical register sequence directly and asserting against that.

♻️ Suggested refactor

```diff
 import { registerSets } from '../vendor/trbr/decode/regs.js';
+import { gdbRegsInfoRiscvIlp32 } from '../vendor/trbr/decode/riscvPanicParse.js';
@@
   it('matches the expected GDB ILP32 register order', () => {
-    // This must match gdbRegsInfoRiscvIlp32 in riscvPanicParse.js
-    const expectedOrder = [
-      'X0', 'RA', 'SP', 'GP', 'TP', 'T0', 'T1', 'T2', 'S0/FP', 'S1',
-      'A0', 'A1', 'A2', 'A3', 'A4', 'A5', 'A6', 'A7',
-      'S2', 'S3', 'S4', 'S5', 'S6', 'S7', 'S8', 'S9', 'S10', 'S11',
-      'T3', 'T4', 'T5', 'T6', 'MEPC',
-    ];
-    expect(registerSets.riscv).toEqual(expectedOrder);
+    expect(registerSets.riscv).toEqual(gdbRegsInfoRiscvIlp32);
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/test/crashDecoder.test.ts` around lines 708 - 716, The test currently hardcodes the full expected GDB ILP32 register order; instead import the canonical sequence (gdbRegsInfoRiscvIlp32) from riscvPanicParse.js and use that as the expected value so the test doesn't drift—replace the local expectedOrder array in the test with an import of gdbRegsInfoRiscvIlp32 and assert expect(registerSets.riscv).toEqual(gdbRegsInfoRiscvIlp32).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/vendor/trbr/capturer/framer.js`:
- Around line 81-86: The flush() logic currently skips finalizing an active
crash block when isCompleteBlock(this._active.lines) is false, which can drop
real crash data; update the flush() code so that if this._active exists you call
this._finalize(finalized) unconditionally (relying on _finalize's hasSignal
protection) instead of gating on isCompleteBlock; reference the symbols flush,
this._active, isCompleteBlock, and _finalize when making the change.
In `@src/vendor/trbr/decode/riscv.js`:
- Around line 252-265: parsePanicOutput() selects the first core via
activeCore/regDumps but still passes the full aggregated stackDump into
getStackAddrAndData(), which can throw "Invalid base address" when other cores
contributed non-contiguous segments; fix by filtering the stackDump to only the
segments that belong to the selected core (use activeCore.coreId or activeCore
identifier) before calling getStackAddrAndData({ stackDump }), and apply the
same filtering wherever the function calls getStackAddrAndData() (e.g., the
earlier path handling regDumps) so only the chosen core's stack segments are
checked for contiguity.
- Around line 1080-1082: The cleanup in the finally block currently does "await
client.sendCommand(`-var-delete ${varObject}`)" which can throw and reject the
whole flow; change it to a best-effort delete by wrapping the call in its own
try/catch so errors are caught and not rethrown (optionally log the error) —
locate the finally block that calls client.sendCommand with varObject and
replace the direct await with a try { await client.sendCommand(...)} catch (err)
{ /* swallow or log */ } so delete failures don't mask successful decoding.
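The shape of a best-effort cleanup in `finally` can be shown with a synchronous sketch; the real code awaits `client.sendCommand('-var-delete ...')`, but the pattern is identical:

```javascript
// Best-effort cleanup: the try/catch inside finally keeps a failing
// cleanup from rejecting a flow that already produced a result.
function withCleanup(work, cleanup) {
  try {
    return work();
  } finally {
    try {
      cleanup();
    } catch {
      // swallow: cleanup failure must not mask a successful decode
    }
  }
}

const result = withCleanup(
  () => 42,
  () => { throw new Error('gdb already gone'); },
);
console.log(result); // 42
```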
---
Nitpick comments:
In `@src/test/crashDecoder.test.ts`:
- Around line 708-716: The test currently hardcodes the full expected GDB ILP32
register order; instead import the canonical sequence (gdbRegsInfoRiscvIlp32)
from riscvPanicParse.js and use that as the expected value so the test doesn't
drift—replace the local expectedOrder array in the test with an import of
gdbRegsInfoRiscvIlp32 and assert
expect(registerSets.riscv).toEqual(gdbRegsInfoRiscvIlp32).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 78430225-cfc5-4922-8f28-053c8fdd80a3
📒 Files selected for processing (6)
- src/crashDecoder.ts
- src/test/crashDecoder.test.ts
- src/vendor/trbr/capturer/capturer.js
- src/vendor/trbr/capturer/framer.js
- src/vendor/trbr/decode/riscv.js
- src/vendor/trbr/decode/riscvPanicParse.js
🚧 Files skipped from review as they are similar to previous changes (2)
- src/vendor/trbr/decode/riscvPanicParse.js
- src/vendor/trbr/capturer/capturer.js
|
@coderabbitai full review

✅ Actions performed

Full review triggered.
Summary by CodeRabbit
New Features
Chores