Conversation
Added testOpFloorEvalNegative covering floor(-1), floor(-1.1)=-2, floor(-0.5)=-1, floor(-1.5)=-2, floor(-2), floor(-2.5)=-3. Uses float equality comparison for fractional cases where floor's internal normalization produces different bytes than packLossless. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
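The commit above lists concrete expectations for floor on negative inputs. As a hedged cross-check, the same cases hold for IEEE 754 doubles in Rust; plain `f64` here stands in for the interpreter's packed decimal float, which is an assumption of this sketch, not the library's actual type:

```rust
fn main() {
    // floor rounds toward negative infinity, so fractional negative
    // inputs move away from zero.
    assert_eq!((-1.0f64).floor(), -1.0);
    assert_eq!((-1.1f64).floor(), -2.0);
    assert_eq!((-0.5f64).floor(), -1.0);
    assert_eq!((-1.5f64).floor(), -2.0);
    assert_eq!((-2.0f64).floor(), -2.0);
    assert_eq!((-2.5f64).floor(), -3.0);
    println!("all negative floor cases hold");
}
```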
Added testOpInvEvalDivisionByZero verifying inv(0) reverts with DivisionByZero(1e76, -76). Added testOpInvEvalNegative covering inv(-1)=-1 and inv(-2)=-0.5. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
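The inv expectations can likewise be sanity-checked with ordinary floating point. In this minimal Rust sketch an explicit zero check stands in for the interpreter's DivisionByZero revert; the `inv` helper is illustrative only, not the library's API:

```rust
fn inv(x: f64) -> Result<f64, &'static str> {
    // Mirror the DivisionByZero revert with an explicit error; IEEE 754
    // division by zero would otherwise return infinity silently.
    if x == 0.0 {
        return Err("DivisionByZero");
    }
    Ok(1.0 / x)
}

fn main() {
    assert_eq!(inv(-1.0), Ok(-1.0));
    assert_eq!(inv(-2.0), Ok(-0.5));
    assert_eq!(inv(0.0), Err("DivisionByZero"));
    println!("inv cases hold");
}
```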
…Selector Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ce, parseInterstitial Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
24 tests covering all tryParseLiteral dispatch paths (decimal, hex, string, sub-parseable with mocked sub-parser, unrecognized), value correctness, cursor advancement, edge cases (empty string, negative zero, max/min hex, uppercase 0X), and parseLiteral happy/revert paths. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Tests uppercase, lowercase, mixed-case, and alternating-case hex digit parsing to verify all three nybble conversion branches (0-9, a-f, A-F). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
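The three conversion branches that commit exercises can be sketched as a standalone nybble converter. This is an assumed shape for illustration; the Solidity implementation uses its own arithmetic:

```rust
/// Convert one ASCII hex digit to its 4-bit value, one branch per
/// digit class: '0'-'9', 'a'-'f', 'A'-'F'.
fn hex_nybble(c: u8) -> Option<u8> {
    match c {
        b'0'..=b'9' => Some(c - b'0'),
        b'a'..=b'f' => Some(c - b'a' + 10),
        b'A'..=b'F' => Some(c - b'A' + 10),
        _ => None,
    }
}

fn main() {
    // One case per branch, plus a rejected character.
    assert_eq!(hex_nybble(b'9'), Some(9));
    assert_eq!(hex_nybble(b'a'), Some(10));
    assert_eq!(hex_nybble(b'F'), Some(15));
    assert_eq!(hex_nybble(b'g'), None);
    println!("all nybble branches covered");
}
```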
- A36-6: Add InvalidRepeatCount selector to integration tests
- A37-1: Add fuzz test for parseString memory snapshot restoration
- A37-3: Add concrete and fuzz tests for UnclosedStringLiteral at end boundary
- A35-5: Mixed-case hex tests committed in previous commit

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Adds direct unit tests for previously untested functions:
- LibSubParse: subParseWords, subParseLiteral, consumeSubParseWordInputData, consumeSubParseLiteralInputData, constant accumulation
- LibParseState: buildConstants, pushLiteral
- Rainterpreter: eval4 non-zero sourceIndex, stateOverlay multiple/duplicate keys
- RainterpreterStore: Set event emission, empty set, get uninitialized, overwrite key
- RainterpreterParser: parsePragma1 empty input
- RainterpreterReferenceExtern: SubParserIndexOutOfBounds error path
- RainterpreterDISPaiRegistry: ERC165 support and test

Also includes code quality fixes: ERC165 on DISPaiRegistry, 0x3f comment in LibParseState, and triage updates (615/640 findings resolved, 96%).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…iage
- A01-3: Move SubParserIndexOutOfBounds from BaseRainterpreterSubParser to ErrSubParse.sol
- A06-3: Hoist float one out of loop in LibExternOpIntInc.run
- A06-4: Add MAX_REPEAT_LITERAL_LENGTH constant in LibParseLiteralRepeat
- Update test import for SubParserIndexOutOfBounds
- Triage remaining Solidity pass 4 findings (A16-1 dismissed, A30-4 dismissed)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add commented forge-lint disable directives explaining why each cast is safe (ASCII chars fit in bytes1, opcode constants fit in uint8, modulo results bounded, string literals fit in bytes32). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- R01-3/R03-4/R03-10: Fix homepage URLs, use workspace refs for edition/license/homepage
- R01-4: Preserve error chains with anyhow!(e) instead of anyhow!("{:?}", e)
- R02-2/R02-3: Remove redundant .to_owned().deref().clone(), unify trace ordering with .rev()
- R02-5: Convert CreateNamespace empty struct to free function
- R02-6: Fix "commiting" typos to "committing"
- R02-7/R02-11: Use .values() instead of clippy suppression, idiomatic .ok_or()
- R02-8: Replace .unwrap() with ? for error propagation in add_or_select
- R02-9: Document empty stack/writes on TryFrom<RawCallResult>
- R02-10: Move doc comments before #[derive] attributes
- R03-1/R03-2: Remove unused serde/serde_json dependencies
- R03-6/R03-9: Fix DISPaiR doc comment, add Debug derive
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)
📝 Walkthrough

Workspace metadata and crate manifests were moved to workspace-scoped keys; eval/fork/namespace/trace data structures and error handling were adjusted; a centralized SubParser error was added and imported; ERC165 support and override modifiers were added in Solidity contracts; several library micro-optimizations were applied; a large suite of unit tests was added or expanded.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)
Actionable comments posted: 9
🧹 Nitpick comments (8)
test/src/lib/op/math/LibOpSub.t.sol (1)
110-122: Add NatSpec comments for consistency with the rest of the file.

All existing test functions in this file have NatSpec comments documenting their purpose. The new test functions should follow the same pattern to maintain consistency and meet audit documentation requirements.
📝 Proposed NatSpec comments
```diff
+    /// Test the eval of `sub` opcode parsed from a string. Tests zero outputs.
     function testOpSubZeroOutputs() external {
         checkBadOutputs(": sub(1 1);", 2, 1, 0);
     }

+    /// Test the eval of `sub` opcode parsed from a string. Tests two outputs.
     function testOpSubTwoOutputs() external {
         checkBadOutputs("_ _: sub(1 1);", 2, 1, 2);
     }

+    /// Test the eval of `sub` opcode parsed from a string. Tests that two
+    /// operands are disallowed.
     function testOpSubEvalTwoOperandsDisallowed() external {
         checkUnhappyParse("_: sub<0 0>(1 1);", abi.encodeWithSelector(UnexpectedOperandValue.selector));
         checkUnhappyParse("_: sub<0 1>(1 1);", abi.encodeWithSelector(UnexpectedOperandValue.selector));
         checkUnhappyParse("_: sub<1 0>(1 1);", abi.encodeWithSelector(UnexpectedOperandValue.selector));
     }
```

Based on learnings: "New code must meet the same audit requirements defined in /audit skill including tested error paths, complete NatSpec."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/src/lib/op/math/LibOpSub.t.sol` around lines 110 - 122, Add missing NatSpec comments above the three new test functions to match the file's existing documentation pattern: place a short `@dev`/`@notice` comment for testOpSubZeroOutputs describing it verifies the sub operator with zero outputs calls checkBadOutputs(": sub(1 1);", 2, 1, 0), for testOpSubTwoOutputs describing it verifies two output placeholders with checkBadOutputs("_ _: sub(1 1);", 2, 1, 2), and for testOpSubEvalTwoOperandsDisallowed describing it asserts disallowed operand-eval combos by calling checkUnhappyParse with UnexpectedOperandValue for sub<0 0>, sub<0 1>, and sub<1 0>; follow the exact NatSpec style used elsewhere in the file (same tags, phrasing and placement immediately above each function).

.gitignore (1)
14-14: Consider anchoring the ignore rule to clarify scope (Line 14).

The `.fixes/` pattern is unanchored and will match any directory named `.fixes` at any depth in the repository. Given the placement near the deployments comment, anchoring it would make the intent explicit: either `/.fixes/` for root-only ignoring, or `/deployments/.fixes/` if only deployment-related fixes should be ignored.

Suggested change

```diff
-.fixes/
+/.fixes/
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.gitignore at line 14, The .gitignore entry ".fixes/" is unanchored and will ignore any directory named .fixes at any depth; update the pattern to reflect intended scope by replacing ".fixes/" with an anchored path like "/.fixes/" to ignore only the repository root folder or "/deployments/.fixes/" to restrict it to the deployments directory—modify the .gitignore entry for ".fixes/" accordingly.

test/src/concrete/Rainterpreter.stateOverlay.t.sol (1)
171-208: Consider verifying `kvs` deduplication for completeness.

Unlike testStateOverlayDuplicateKeyLastWins, this test discards `kvs` without verifying its contents. For consistency and to confirm deduplication behavior with interleaved keys, consider adding assertions for `kvs.length` and its contents.

♻️ Optional: Add kvs verification

```diff
-        (StackItem[] memory stack,) = I_INTERPRETER.eval4(
+        (StackItem[] memory stack, bytes32[] memory kvs) = I_INTERPRETER.eval4(
             EvalV4({
                 store: I_STORE,
                 namespace: LibNamespace.qualifyNamespace(StateNamespace.wrap(0), address(this)),
                 bytecode: bytecode,
                 sourceIndex: SourceIndexV2.wrap(0),
                 context: new bytes32[][](0),
                 inputs: new StackItem[](0),
                 stateOverlay: stateOverlay
             })
         );
         assertEq(stack.length, 2, "two get ops should produce two stack items");
         assertEq(StackItem.unwrap(stack[0]), v2Second, "get(2) should return the second value for k2");
         assertEq(StackItem.unwrap(stack[1]), v1Second, "get(1) should return the second value for k1");
+        assertEq(kvs.length, 4, "kvs should contain two deduplicated key-value pairs");
```
Verify each finding against the current code and only fix it if needed. In `@test/src/concrete/Rainterpreter.stateOverlay.t.sol` around lines 171 - 208, Capture and verify the deduped kvs returned by I_INTERPRETER.eval4 in testStateOverlayDuplicateKeysInterleaved: change the return deconstruction to (StackItem[] memory stack, /*StateKV[]*/ memory kvs) = I_INTERPRETER.eval4(...) and add assertions that kvs.length == 2 and that the two entries contain the deduplicated keys/values (keys k1 and k2 with values v1Second and v2Second respectively) to confirm stateOverlay deduplication behavior; reference the stateOverlay, EvalV4 call, and local symbols k1,k2,v1Second,v2Second when adding the checks.

test/src/lib/parse/LibParsePragma.keyword.t.sol (1)
241-243: Prefer computed cursor expectations over magic numbers.
`60` and `118` are brittle. Compute expected cursor deltas from `PRAGMA_KEYWORD_BYTES_LENGTH` and string lengths to keep tests stable as parser details evolve.

Suggested refactor

```diff
-        // First pragma: "using-words-from " (17) + 42-char address + " " interstitial = 60.
+        // First pragma: keyword + whitespace + address + interstitial.
         uint256 cursor2 = state.parsePragma(cursor, end);
-        assertEq(cursor2, cursor + 60, "first pragma cursor");
+        uint256 expectedFirstPragmaDiff = PRAGMA_KEYWORD_BYTES_LENGTH + 1 + 42 + 1;
+        assertEq(cursor2, cursor + expectedFirstPragmaDiff, "first pragma cursor");
```

```diff
-        // "using-words-from " (17) + 42 + " /* a comment */ " (17) + 42 = 118
-        checkPragmaParsing(str, 118, values, "comment between addresses");
+        uint256 expectedDiff = PRAGMA_KEYWORD_BYTES_LENGTH + 1 + 42 + bytes(" /* a comment */ ").length + 42;
+        checkPragmaParsing(str, expectedDiff, values, "comment between addresses");
```

Also applies to: 270-271
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/src/lib/parse/LibParsePragma.keyword.t.sol` around lines 241 - 243, Tests use magic cursor deltas (60, 118) when asserting parsePragma behavior; replace these brittle literals with computed expectations derived from PRAGMA_KEYWORD_BYTES_LENGTH and the actual literal string lengths so the test adapts to parser changes. Update the assertions around parsePragma(cursor, end) (variables cursor and cursor2) to compute the expected delta as PRAGMA_KEYWORD_BYTES_LENGTH + bytes length of the address token (42) + length of the interstitial space/other literal segments used in the test, and use that computed value when asserting cursor2 == cursor + expectedDelta (apply the same pattern for the second assertion currently using 118).

test/src/lib/parse/LibParseStackName.t.sol (1)
86-89: Avoid hardcoded post-push index in test setup.

Use the returned `index` from `pushStackName` instead of assigning `1` directly, so the test stays robust if internal indexing semantics evolve.

Suggested test hardening

```diff
-        LibParseStackName.pushStackName(state, word1);
-        state.lineTracker = 1;
-        state.topLevel1 = 1;
+        (, uint256 pushedIndex) = LibParseStackName.pushStackName(state, word1);
+        state.lineTracker = pushedIndex;
+        state.topLevel1 = pushedIndex;
```

Also applies to: 133-136
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/src/lib/parse/LibParseStackName.t.sol` around lines 86 - 89, The test currently hardcodes post-push indices (setting state.lineTracker and state.topLevel1 to 1) after calling LibParseStackName.pushStackName; change the test to capture and use the returned index from LibParseStackName.pushStackName (e.g., assign the call result to a variable) and then set state.lineTracker and state.topLevel1 to that returned index so the test does not rely on the internal indexing being 1; make the same change for the other occurrence around the second block (lines ~133-136) referencing pushStackName.

crates/eval/src/namespace.rs (1)
8-11: Consider replacing the `44` magic offset with named constants.

This makes the 32-byte slot + 12-byte left-padding intent immediately obvious in future maintenance.
Readability refactor
```diff
 pub fn qualify_namespace(state_namespace: B256, sender: Address) -> FullyQualifiedNamespace {
-    // Combine state namespace and sender into a single 64-byte array
+    const WORD_BYTES: usize = 32;
+    const ADDRESS_BYTES: usize = 20;
+    const ADDRESS_OFFSET: usize = WORD_BYTES + (WORD_BYTES - ADDRESS_BYTES); // 44
+    // Combine state namespace and sender into a single 64-byte array
     let mut combined = [0u8; 64];
     combined[..32].copy_from_slice(state_namespace.as_slice());
-    combined[44..].copy_from_slice(sender.as_slice());
+    combined[ADDRESS_OFFSET..].copy_from_slice(sender.as_slice());
```
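The suggested constants are easy to verify in isolation. A runnable sketch of the same 64-byte layout follows; the `qualify` helper and constant names are illustrative, not the crate's actual API:

```rust
const WORD_BYTES: usize = 32;
const ADDRESS_BYTES: usize = 20;
// An address occupies the low 20 bytes of the second 32-byte word, so
// its write offset is 32 + (32 - 20) = 44.
const ADDRESS_OFFSET: usize = WORD_BYTES + (WORD_BYTES - ADDRESS_BYTES);

fn qualify(namespace: &[u8; 32], sender: &[u8; 20]) -> [u8; 64] {
    let mut combined = [0u8; 64];
    combined[..WORD_BYTES].copy_from_slice(namespace);
    combined[ADDRESS_OFFSET..].copy_from_slice(sender);
    combined
}

fn main() {
    let combined = qualify(&[0xAA; 32], &[0xBB; 20]);
    assert_eq!(ADDRESS_OFFSET, 44);
    assert_eq!(combined[31], 0xAA); // last namespace byte
    assert_eq!(&combined[32..44], &[0u8; 12][..]); // left padding stays zero
    assert_eq!(combined[44], 0xBB); // first sender byte
    println!("layout holds");
}
```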
Verify each finding against the current code and only fix it if needed. In `@crates/eval/src/namespace.rs` around lines 8 - 11, Replace the magic offsets with named constants: introduce STATE_LEN = 32 and LEFT_PADDING = 12 (or SENDER_OFFSET = STATE_LEN + LEFT_PADDING) and use those constants in place of the numeric indices in the combined buffer operations (replace combined[..32] with combined[..STATE_LEN] and combined[44..] with combined[SENDER_OFFSET..]) so the 32-byte slot + 12-byte left-padding intent is explicit for combined, state_namespace, and sender.

test/src/lib/parse/LibSubParse.subParseWords.t.sol (1)
146-160: Consider using a specific revert selector for better test precision.

Line 158 uses `vm.expectRevert()` without specifying the expected error. For more precise testing, consider expecting the specific `UnknownWord` error.

♻️ Proposed improvement

```diff
+import {UnknownWord} from "src/error/ErrParse.sol";
+
 vm.expectRevert();
+// Or more precisely:
+// vm.expectRevert(abi.encodeWithSelector(UnknownWord.selector, "unknown-word"));
 this.externalParse(src);
```
Verify each finding against the current code and only fix it if needed. In `@test/src/lib/parse/LibSubParse.subParseWords.t.sol` around lines 146 - 160, Update the testSubParseWordsUnknownWordReverts test to assert the specific UnknownWord revert instead of a generic vm.expectRevert(); capture the exact revert selector for the UnknownWord error and pass it to vm.expectRevert before calling this.externalParse(src). Reference the UnknownWord error selector and the test’s use of ContextReturningSubParser.subParseWord2 and this.externalParse to craft the expected revert payload (including the unknown-word identifier) so the test fails only for the precise UnknownWord error.

test/src/lib/parse/literal/LibParseLiteral.dispatch.t.sol (1)
22-29: Redundant `literalParserFunctionPointers()` assignment.

`literalParserFunctionPointers()` is passed to `newState()` on line 24 and then assigned again to `state.literalParsers` on line 25. This pattern repeats in the other external wrappers (lines 37-39, 51-53). One of these assignments is likely redundant.

♻️ Proposed simplification

```diff
 function externalParseLiteral(bytes memory data) external view returns (uint256 newCursor, bytes32 value) {
-    ParseState memory state =
-        LibParseState.newState(data, "", "", LibAllStandardOps.literalParserFunctionPointers());
-    state.literalParsers = LibAllStandardOps.literalParserFunctionPointers();
+    bytes memory literalParsers = LibAllStandardOps.literalParserFunctionPointers();
+    ParseState memory state = LibParseState.newState(data, "", "", literalParsers);
+    state.literalParsers = literalParsers;
     uint256 cursor = Pointer.unwrap(data.dataPointer());
     uint256 end = cursor + data.length;
     return state.parseLiteral(cursor, end);
 }
```

Or, if `newState` already sets `literalParsers`, remove the redundant assignment entirely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/src/lib/parse/literal/LibParseLiteral.dispatch.t.sol` around lines 22 - 29, The external wrapper externalParseLiteral redundantly passes LibAllStandardOps.literalParserFunctionPointers() to LibParseState.newState(...) and then immediately reassigns state.literalParsers; remove the redundant assignment by deleting the second line that sets state.literalParsers (or if newState does not already initialize literalParsers, instead remove the literalParserFunctionPointers() argument to newState) and apply the same cleanup to the other external wrappers that repeat the pattern (inspect the other external* wrapper functions to keep only one initialization of literalParsers).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/concrete/RainterpreterDISPaiRegistry.sol`:
- Around line 17-19: The supportsInterface function currently uses invalid
syntax type(ERC165).interfaceId and is declared pure; change supportsInterface
in RainterpreterDISPaiRegistry to be view (not pure) and replace the comparison
to type(ERC165).interfaceId with a call to super.supportsInterface(interfaceId),
i.e., return interfaceId == type(IDISPaiRegistry).interfaceId ||
super.supportsInterface(interfaceId); so the ERC165 parent implementation is
delegated to correctly; keep the function signature and referencing symbol
supportsInterface and IDISPaiRegistry as shown.
In `@src/lib/extern/reference/literal/LibParseLiteralRepeat.sol`:
- Around line 30-32: The guard that rejects oversized repeat literal bodies
incorrectly uses a >= comparison against MAX_REPEAT_LITERAL_LENGTH (symbol:
MAX_REPEAT_LITERAL_LENGTH), which disallows length == 78 despite the constant
and docs stating 78 is the max; update the conditional that currently uses ">="
(the length check around lines 59-60) to use ">" so values equal to
MAX_REPEAT_LITERAL_LENGTH are permitted, and ensure any related error message
still reflects the correct inclusive maximum.
In `@test/src/concrete/Rainterpreter.eval.t.sol`:
- Around line 48-49: Update the stale safety-bound comment next to the
forge-lint disable in the test: replace the incorrect "bounded to < 26" note
with the correct bound reflecting expectedInputs <= 15 (as set earlier in the
test). Locate the comment near the forge-lint directive in the
Rainterpreter.eval test (look for the expectedInputs variable and the
cast/unsafe-typecast suppression) and change the text to accurately state the <=
15 bound so the inline justification matches the actual constraint.
In `@test/src/concrete/RainterpreterStore.getUninitialized.t.sol`:
- Around line 31-32: Remove the per-test fuzz override comment `///
forge-config: default.fuzz.runs = 100` introduced above the test function; leave
the test function `testGetUninitializedKeyFuzz(StateNamespace namespace, bytes32
key)` unchanged and rely on the global fuzzing configuration (foundry.toml) so
tests under test/**/*.t.sol use the repo-wide 2048 runs setting. Ensure no other
per-test `forge-config` annotations are present in this file.
In `@test/src/concrete/RainterpreterStore.setEmpty.t.sol`:
- Around line 38-39: Remove the local Forge fuzz-run annotation in
test/src/concrete/RainterpreterStore.setEmpty.t.sol by deleting the line "///
forge-config: default.fuzz.runs = 100" so the test function
testSetEmptyArrayFuzz(StateNamespace) uses the global Foundry fuzz runs (2048);
make sure no other per-test or per-function "forge-config" fuzz annotations
remain in that file or adjacent test functions.
In `@test/src/concrete/RainterpreterStore.setEvent.t.sol`:
- Around line 61-62: Remove the per-test forge-config override that sets
`default.fuzz.runs = 100` from the test file; locate the annotation immediately
above the test function `testSetEventFQNMatchesQualifyNamespace` in
`RainterpreterStore.setEvent.t.sol` and delete that line so the test uses the
repository/global fuzz configuration (no per-test `forge-config` overrides).
In `@test/src/lib/parse/LibParsePragma.keyword.t.sol`:
- Around line 249-257: The test currently only checks two addresses via
state.subParsers and the loaded deref/pointer but doesn't assert the linked list
ends; after loading the first node into deref (the value checked against addr1)
read that node's next pointer (use the same pointer arithmetic: compute pointer
= uint256(deref) >> 0xF0 and mload(pointer) or load the "next" word from the
node) and assert it is zero (or the expected sentinel) to guarantee no extra
node exists; reference state.subParsers, deref, pointer, addr1 and addr2 when
locating where to add the termination assertion.
In `@test/src/lib/parse/LibParseState.pushLiteral.t.sol`:
- Around line 72-73: Update the misleading comment to reflect the actual test
implementation: state is shared across two separate buffers (data1 and data2)
with the shared LibParseState initialized with empty data, not a single buffer
with both literals separated by a space; modify the comment near the test setup
to mention data1 and data2 and that the state persists between calls rather than
implying a single combined buffer.
In `@test/src/lib/parse/LibSubParse.consumeSubParseWordInputData.t.sol`:
- Around line 30-35: The current assembly aliasing operandBytes := operandValues
treats the array length as byte length and truncates payload when operandValues
is non-empty; replace that aliasing and instead build operandBytes as a proper
packed encoding containing the element byte-length (32 * operandValues.length)
followed by the full contiguous elements (or simply use
abi.encodePacked(uint256(operandValues.length), operandValues) for clarity),
then use that operandBytes in the existing bytes.concat(...) alongside
constantsHeight, ioByte, uint16(word.length) and word so non-empty operandValues
are preserved.
---
Nitpick comments:
In @.gitignore:
- Line 14: The .gitignore entry ".fixes/" is unanchored and will ignore any
directory named .fixes at any depth; update the pattern to reflect intended
scope by replacing ".fixes/" with an anchored path like "/.fixes/" to ignore
only the repository root folder or "/deployments/.fixes/" to restrict it to the
deployments directory—modify the .gitignore entry for ".fixes/" accordingly.
In `@crates/eval/src/namespace.rs`:
- Around line 8-11: Replace the magic offsets with named constants: introduce
STATE_LEN = 32 and LEFT_PADDING = 12 (or SENDER_OFFSET = STATE_LEN +
LEFT_PADDING) and use those constants in place of the numeric indices in the
combined buffer operations (replace combined[..32] with combined[..STATE_LEN]
and combined[44..] with combined[SENDER_OFFSET..]) so the 32-byte slot + 12-byte
left-padding intent is explicit for combined, state_namespace, and sender.
In `@test/src/concrete/Rainterpreter.stateOverlay.t.sol`:
- Around line 171-208: Capture and verify the deduped kvs returned by
I_INTERPRETER.eval4 in testStateOverlayDuplicateKeysInterleaved: change the
return deconstruction to (StackItem[] memory stack, /*StateKV[]*/ memory kvs) =
I_INTERPRETER.eval4(...) and add assertions that kvs.length == 2 and that the
two entries contain the deduplicated keys/values (keys k1 and k2 with values
v1Second and v2Second respectively) to confirm stateOverlay deduplication
behavior; reference the stateOverlay, EvalV4 call, and local symbols
k1,k2,v1Second,v2Second when adding the checks.
In `@test/src/lib/op/math/LibOpSub.t.sol`:
- Around line 110-122: Add missing NatSpec comments above the three new test
functions to match the file's existing documentation pattern: place a short
`@dev`/`@notice` comment for testOpSubZeroOutputs describing it verifies the sub
operator with zero outputs calls checkBadOutputs(": sub(1 1);", 2, 1, 0), for
testOpSubTwoOutputs describing it verifies two output placeholders with
checkBadOutputs("_ _: sub(1 1);", 2, 1, 2), and for
testOpSubEvalTwoOperandsDisallowed describing it asserts disallowed operand-eval
combos by calling checkUnhappyParse with UnexpectedOperandValue for sub<0 0>,
sub<0 1>, and sub<1 0>; follow the exact NatSpec style used elsewhere in the
file (same tags, phrasing and placement immediately above each function).
In `@test/src/lib/parse/LibParsePragma.keyword.t.sol`:
- Around line 241-243: Tests use magic cursor deltas (60, 118) when asserting
parsePragma behavior; replace these brittle literals with computed expectations
derived from PRAGMA_KEYWORD_BYTES_LENGTH and the actual literal string lengths
so the test adapts to parser changes. Update the assertions around
parsePragma(cursor, end) (variables cursor and cursor2) to compute the expected
delta as PRAGMA_KEYWORD_BYTES_LENGTH + bytes length of the address token (42) +
length of the interstitial space/other literal segments used in the test, and
use that computed value when asserting cursor2 == cursor + expectedDelta (apply
the same pattern for the second assertion currently using 118).
In `@test/src/lib/parse/LibParseStackName.t.sol`:
- Around line 86-89: The test currently hardcodes post-push indices (setting
state.lineTracker and state.topLevel1 to 1) after calling
LibParseStackName.pushStackName; change the test to capture and use the returned
index from LibParseStackName.pushStackName (e.g., assign the call result to a
variable) and then set state.lineTracker and state.topLevel1 to that returned
index so the test does not rely on the internal indexing being 1; make the same
change for the other occurrence around the second block (lines ~133-136)
referencing pushStackName.
In `@test/src/lib/parse/LibSubParse.subParseWords.t.sol`:
- Around line 146-160: Update the testSubParseWordsUnknownWordReverts test to
assert the specific UnknownWord revert instead of a generic vm.expectRevert();
capture the exact revert selector for the UnknownWord error and pass it to
vm.expectRevert before calling this.externalParse(src). Reference the
UnknownWord error selector and the test’s use of
ContextReturningSubParser.subParseWord2 and this.externalParse to craft the
expected revert payload (including the unknown-word identifier) so the test
fails only for the precise UnknownWord error.
In `@test/src/lib/parse/literal/LibParseLiteral.dispatch.t.sol`:
- Around line 22-29: The external wrapper externalParseLiteral redundantly
passes LibAllStandardOps.literalParserFunctionPointers() to
LibParseState.newState(...) and then immediately reassigns state.literalParsers;
remove the redundant assignment by deleting the second line that sets
state.literalParsers (or if newState does not already initialize literalParsers,
instead remove the literalParserFunctionPointers() argument to newState) and
apply the same cleanup to the other external wrappers that repeat the pattern
(inspect the other external* wrapper functions to keep only one initialization
of literalParsers).
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
- `CLAUDE.md` is excluded by `!CLAUDE.md`
- `Cargo.lock` is excluded by `!**/*.lock`
- `audit/2026-02-17-03/triage.md` is excluded by `!audit/**`
📒 Files selected for processing (69)
- .gitignore
- Cargo.toml
- crates/cli/Cargo.toml
- crates/cli/src/commands/eval.rs
- crates/cli/src/commands/parse.rs
- crates/dispair/Cargo.toml
- crates/dispair/src/lib.rs
- crates/eval/src/eval.rs
- crates/eval/src/fork.rs
- crates/eval/src/namespace.rs
- crates/eval/src/trace.rs
- crates/parser/Cargo.toml
- crates/test_fixtures/Cargo.toml
- src/abstract/BaseRainterpreterSubParser.sol
- src/concrete/RainterpreterDISPaiRegistry.sol
- src/concrete/RainterpreterParser.sol
- src/error/ErrSubParse.sol
- src/lib/extern/reference/literal/LibParseLiteralRepeat.sol
- src/lib/extern/reference/op/LibExternOpIntInc.sol
- src/lib/op/bitwise/LibOpDecodeBits.sol
- src/lib/op/store/LibOpSet.sol
- src/lib/parse/LibParseState.sol
- test/src/concrete/Rainterpreter.eval.nonZeroSourceIndex.t.sol
- test/src/concrete/Rainterpreter.eval.t.sol
- test/src/concrete/Rainterpreter.stateOverlay.t.sol
- test/src/concrete/RainterpreterDISPaiRegistry.ierc165.t.sol
- test/src/concrete/RainterpreterParser.parsePragmaEmpty.t.sol
- test/src/concrete/RainterpreterReferenceExtern.repeat.t.sol
- test/src/concrete/RainterpreterReferenceExtern.subParserIndexOutOfBounds.t.sol
- test/src/concrete/RainterpreterStore.getUninitialized.t.sol
- test/src/concrete/RainterpreterStore.overwriteKey.t.sol
- test/src/concrete/RainterpreterStore.setEmpty.t.sol
- test/src/concrete/RainterpreterStore.setEvent.t.sol
- test/src/lib/op/LibAllStandardOps.t.sol
- test/src/lib/op/evm/LibOpBlockNumber.t.sol
- test/src/lib/op/evm/LibOpChainId.t.sol
- test/src/lib/op/evm/LibOpTimestamp.t.sol
- test/src/lib/op/math/LibOpFloor.t.sol
- test/src/lib/op/math/LibOpInv.t.sol
- test/src/lib/op/math/LibOpMax.t.sol
- test/src/lib/op/math/LibOpMin.t.sol
- test/src/lib/op/math/LibOpSqrt.t.sol
- test/src/lib/op/math/LibOpSub.t.sol
- test/src/lib/op/math/uint256/LibOpMaxUint256.t.sol
- test/src/lib/parse/LibParse.namedLHS.t.sol
- test/src/lib/parse/LibParse.unexpectedRHS.t.sol
- test/src/lib/parse/LibParseError.t.sol
- test/src/lib/parse/LibParseInterstitial.t.sol
- test/src/lib/parse/LibParseOperand.handleOperand.t.sol
- test/src/lib/parse/LibParseOperand.handleOperand8M1M1.t.sol
- test/src/lib/parse/LibParseOperand.handleOperandM1M1.t.sol
- test/src/lib/parse/LibParseOperand.parseOperand.t.sol
- test/src/lib/parse/LibParsePragma.keyword.t.sol
- test/src/lib/parse/LibParseStackName.t.sol
- test/src/lib/parse/LibParseState.buildBytecode.t.sol
- test/src/lib/parse/LibParseState.buildConstants.t.sol
- test/src/lib/parse/LibParseState.endSource.t.sol
- test/src/lib/parse/LibParseState.pushLiteral.t.sol
- test/src/lib/parse/LibParseState.pushOpToSource.t.sol
- test/src/lib/parse/LibSubParse.constantAccumulation.t.sol
- test/src/lib/parse/LibSubParse.consumeSubParseLiteralInputData.t.sol
- test/src/lib/parse/LibSubParse.consumeSubParseWordInputData.t.sol
- test/src/lib/parse/LibSubParse.subParseLiteral.t.sol
- test/src/lib/parse/LibSubParse.subParseWords.t.sol
- test/src/lib/parse/LibSubParse.unknownWord.t.sol
- test/src/lib/parse/literal/LibParseLiteral.dispatch.t.sol
- test/src/lib/parse/literal/LibParseLiteralHex.parseHex.t.sol
- test/src/lib/parse/literal/LibParseLiteralString.boundString.t.sol
- test/src/lib/parse/literal/LibParseLiteralString.parseString.t.sol
💤 Files with no reviewable changes (1)
- crates/test_fixtures/Cargo.toml
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix inaccurate overflow comments in LibParseLiteralRepeat (10^78 > 2^256)
- Fix stale bound comment in eval test (<26 -> <=15)
- Remove per-test fuzz.runs=100 overrides in three RainterpreterStore tests
- Add linked-list termination assertion in pragma keyword test
- Fix misleading comment in pushLiteral test
- Fix buildInputData helper that truncated non-empty operand arrays
- Add test for non-empty operand values in consumeSubParseWordInputData

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🧹 Nitpick comments (2)
test/src/lib/parse/LibSubParse.consumeSubParseWordInputData.t.sol (2)
9-17: NatSpec block mixes tagged and untagged entries. Line 11 onward continues documentation without explicit tags after using `@title`/`@notice`. Please make each entry explicitly tagged to match the repo docs rules.

💡 Suggested doc-block adjustment

```diff
 /// @title LibSubParseConsumeSubParseWordInputDataTest
 /// @notice Direct unit tests for `LibSubParse.consumeSubParseWordInputData`.
-/// This function unpacks a sub-parse header from encoded `bytes` input,
-/// extracting the constants height, IO byte, and constructing a fresh
-/// `ParseState` from the remaining word string and provided meta/operand
-/// handler bytes.
+/// @dev Unpacks a sub-parse header from encoded `bytes` input, extracting
+/// @dev constants height, IO byte, and constructing a fresh `ParseState`
+/// @dev from remaining word bytes and provided meta/operand handlers.
 ///
-/// The input data format is:
-/// [constantsHeight:2 bytes][ioByte:1 byte][wordLength:2 bytes][wordData:wordLength bytes][operandValues...]
+/// @dev Input data format:
+/// @dev [constantsHeight:2][ioByte:1][wordLength:2][wordData:wordLength][operandValues...]
```

As per coding guidelines: "When a NatSpec doc block contains any explicit tag (e.g. `@title`), all entries must be explicitly tagged."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/src/lib/parse/LibSubParse.consumeSubParseWordInputData.t.sol` around lines 9-17: the NatSpec on the test contract mixes tagged and untagged entries; update the docblock on `LibSubParseConsumeSubParseWordInputDataTest` so every line uses explicit NatSpec tags: keep `@title` and `@notice`, and add tags such as `@dev`, `@param`, or `@return` as appropriate for the remaining descriptions (e.g., replace the untagged paragraph describing the input format with explicit `@dev` lines and tag the format details), ensuring all documentation entries are consistently tagged.
25-43: Guard against silent `word.length` truncation in test helper. Line 42 casts `word.length` to `uint16`. If a future test passes a longer word, the header length wraps and creates malformed input. Add an explicit bound check in `buildInputData`.

💡 Suggested helper hardening

```diff
 function buildInputData(uint16 constantsHeight, uint8 ioByte, bytes memory word, bytes32[] memory operandValues)
     internal
     pure
     returns (bytes memory)
 {
+    require(word.length <= type(uint16).max, "word too long");
     // Encode the operand values as raw memory: length word + value words.
     // Cannot alias bytes32[] as bytes directly because the length field
     // semantics differ (element count vs byte count). Encode manually.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/src/lib/parse/LibSubParse.consumeSubParseWordInputData.t.sol` around lines 25 - 43, The helper buildInputData silently truncates word.length to uint16; add an explicit guard at the start of buildInputData that checks word.length <= type(uint16).max (or <= 0xFFFF) and revert (or require) with a clear message if it exceeds that bound to prevent header wrap/ malformed inputs; ensure the check references the function buildInputData and the variable word.length so future tests fail early instead of producing truncated headers.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`src/generated/RainterpreterReferenceExtern.pointers.sol` is excluded by `!**/generated/**`
📒 Files selected for processing (9)
- src/concrete/RainterpreterDISPaiRegistry.sol
- src/lib/extern/reference/literal/LibParseLiteralRepeat.sol
- test/src/concrete/Rainterpreter.eval.t.sol
- test/src/concrete/RainterpreterStore.getUninitialized.t.sol
- test/src/concrete/RainterpreterStore.setEmpty.t.sol
- test/src/concrete/RainterpreterStore.setEvent.t.sol
- test/src/lib/parse/LibParsePragma.keyword.t.sol
- test/src/lib/parse/LibParseState.pushLiteral.t.sol
- test/src/lib/parse/LibSubParse.consumeSubParseWordInputData.t.sol
🚧 Files skipped from review as they are similar to previous changes (4)
- test/src/concrete/RainterpreterStore.getUninitialized.t.sol
- test/src/concrete/RainterpreterStore.setEmpty.t.sol
- test/src/concrete/RainterpreterStore.setEvent.t.sol
- test/src/lib/parse/LibParseState.pushLiteral.t.sol
🧹 Nitpick comments (2)
crates/cli/src/commands/eval.rs (1)
126-130: Consider importing `RainEvalResultFromRawCallResultError` to reduce verbosity. The explicit full-path type annotation works but is verbose. Since `RainEvalResult` is already imported from the same module, importing the error type would improve readability.

♻️ Suggested refactor

Add to imports:

```diff
-use rain_interpreter_eval::trace::RainEvalResult;
+use rain_interpreter_eval::trace::{RainEvalResult, RainEvalResultFromRawCallResultError};
```

Then simplify the closure:

```diff
-    let rain_eval_result: RainEvalResult = res.try_into().map_err(
-        |e: rain_interpreter_eval::trace::RainEvalResultFromRawCallResultError| {
-            anyhow!(e)
-        },
-    )?;
+    let rain_eval_result: RainEvalResult = res
+        .try_into()
+        .map_err(|e: RainEvalResultFromRawCallResultError| anyhow!(e))?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/cli/src/commands/eval.rs` around lines 126-130: import `RainEvalResultFromRawCallResultError` from `rain_interpreter_eval::trace` and simplify the closure where `res` is converted into `RainEvalResult`: replace the fully-qualified error type in the `map_err` closure with the imported `RainEvalResultFromRawCallResultError`, so the line becomes `res.try_into().map_err(|e: RainEvalResultFromRawCallResultError| anyhow!(e))?`, shorter and more readable while keeping the same behavior.

crates/eval/src/fork.rs (1)
589-596: Fix local variable typo for readability. `fully_quallified_namespace` is misspelled; rename to `fully_qualified_namespace`.

✏️ Suggested rename

```diff
-    let fully_quallified_namespace = qualify_namespace(namespace.into(), from_address);
+    let fully_qualified_namespace = qualify_namespace(namespace.into(), from_address);
 ...
-    namespace: fully_quallified_namespace.into(),
+    namespace: fully_qualified_namespace.into(),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/eval/src/fork.rs` around lines 589 - 596, The local variable name fully_quallified_namespace is misspelled; rename it to fully_qualified_namespace where it’s declared (result of qualify_namespace(namespace.into(), from_address)) and update all its usages (e.g., the namespace field passed into forker.alloy_call / getCall) to the new name to keep naming consistent and readable.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- crates/cli/src/commands/eval.rs
- crates/cli/src/commands/parse.rs
- crates/eval/src/fork.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/cli/src/commands/parse.rs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai assess this PR size classification for the totality of the PR with the following criteria and report it in your comment:

S/M/L PR Classification Guidelines: This guide helps classify merged pull requests by effort and complexity rather than just line count. The goal is to assess the difficulty and scope of changes after they have been completed.

Small (S)
Characteristics:
Review Effort: Would have taken 5-10 minutes
Examples:

Medium (M)
Characteristics:
Review Effort: Would have taken 15-30 minutes
Examples:

Large (L)
Characteristics:
Review Effort: Would have taken 45+ minutes
Examples:

Additional Factors to Consider
When deciding between sizes, also consider:

Notes:
SIZE=L
Motivation
Solution
Checks
By submitting this for review, I'm confirming I've done the following:
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Chores