
Fail fast on missing topic and data in decode logic#2260

Merged
findolor merged 6 commits into main from local-db-decode-fail
Oct 30, 2025

Conversation

@findolor
Collaborator

@findolor findolor commented Oct 22, 2025

Motivation

See issue: #2223

Solution

  • Add a new error variant
  • Update decode logic to fail fast on missing topic or invalid data
  • Update tests
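The fail-fast validation described above can be sketched as follows. This is an illustrative stand-in, not the crate's actual code: `RawEvent` and `validate` are hypothetical names, though the `MissingRequiredField { field, index }` shape matches the variant this PR adds.

```rust
// Illustrative sketch of the fail-fast checks; RawEvent and validate
// are hypothetical stand-ins for the crate's real types.
#[derive(Debug, PartialEq)]
pub enum DecodeError {
    MissingRequiredField { field: &'static str, index: usize },
}

pub struct RawEvent {
    pub topic0: Option<String>,
    pub data: String,
}

pub fn validate(event: &RawEvent, index: usize) -> Result<(), DecodeError> {
    // Fail fast when topic0 is absent.
    if event.topic0.is_none() {
        return Err(DecodeError::MissingRequiredField { field: "topic0", index });
    }
    // Reject data that is empty after stripping an optional 0x/0X prefix.
    let trimmed = event.data.trim();
    let stripped = trimmed
        .strip_prefix("0x")
        .or_else(|| trimmed.strip_prefix("0X"))
        .unwrap_or(trimmed);
    if stripped.is_empty() {
        return Err(DecodeError::MissingRequiredField { field: "data", index });
    }
    Ok(())
}
```

Carrying the batch index in the error lets a caller report exactly which event in a batch failed.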

Checks

By submitting this for review, I'm confirming I've done the following:

  • made this PR as small as possible
  • unit-tested any new functionality
  • linked any relevant issues or PRs
  • included screenshots (if this involves a front-end change)

Summary by CodeRabbit

  • Bug Fixes

    • Improved error reporting during event decoding to pinpoint which required field and which item index is missing; stricter validation now catches empty/zero-length data.
    • Existing successful decode behavior preserved for valid events.
  • Tests

    • Updated tests to cover missing fields and zero-length data scenarios.

@findolor findolor requested review from 0xgleb and hardyjosh October 22, 2025 06:30
@findolor findolor self-assigned this Oct 22, 2025
@coderabbitai
Contributor

coderabbitai Bot commented Oct 22, 2025

Walkthrough

Added a new DecodeError::MissingRequiredField { field: &'static str, index: usize } variant and refactored decode_events to an iterator-based pipeline that validates topic0 and data, normalizes hex data, decodes by event type, and collects results or returns per-item errors.

Changes

Cohort / File(s) Change Summary
Error handling & decode refactor
crates/common/src/local_db/decode.rs
Added DecodeError::MissingRequiredField { field: &'static str, index: usize }; replaced imperative per-item loop with an enumerate().map(...).collect() pipeline in decode_events that validates topic0 and data (including zero-length hex after prefix stripping), normalizes data, determines event type, decodes payloads, and constructs DecodedEventData. Tests updated to expect the new error cases.
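The enumerate-map-collect pipeline described above can be shown in miniature; the types here are simplified placeholders, not the crate's actual definitions. Collecting `Vec<Result<_, _>>` into `Result<Vec<_>, _>` short-circuits on the first per-item error, which is what gives the fail-fast behavior.

```rust
// Minimal sketch of an enumerate().map().collect() decode pipeline.
// The error carries the index of the offending item, mirroring the
// MissingRequiredField { field, index } idea from the PR.
fn decode_all(raw: &[&str]) -> Result<Vec<String>, (usize, &'static str)> {
    raw.iter()
        .enumerate()
        .map(|(index, item)| {
            if item.is_empty() {
                Err((index, "missing data"))
            } else {
                Ok(item.to_uppercase())
            }
        })
        .collect() // Ok(vec) if all succeed, otherwise the first Err
}
```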

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant decode_events
    participant Validator as Validator (topic0/data)
    participant Decoder as EventDecoder
    participant Collector as ResultCollector

    Caller->>decode_events: call decode_events(events)
    decode_events->>Validator: enumerate + validate topic0 & data
    alt missing required field
        Validator-->>decode_events: MissingRequiredField { field, index }
        decode_events-->>Caller: Err(MissingRequiredField)
    else valid entry
        Validator-->>decode_events: normalized topic & data
        decode_events->>Decoder: determine event type & decode payload
        Decoder-->>decode_events: DecodedEventData | Unknown
        decode_events->>Collector: map result
        Collector-->>decode_events: collected Vec<Result<DecodedEventData, DecodeError>>
        decode_events-->>Caller: Ok(vec) or Err(first_error)
    end
Loading

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Single-file change but introduces a different control-flow style (iterator pipeline) and a new public error variant requiring attention.
  • Areas to focus review on:
    • Correct indexing and error attribution for MissingRequiredField { index }.
    • Hex prefix stripping and zero-length handling semantics.
    • Preservation of existing behavior for Unknown event branch and field population in DecodedEventData.
    • Test coverage alignment with new error cases.

Suggested reviewers

  • hardyjosh
  • 0xgleb

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 75.00% which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title Check ✅ Passed The PR title "Fail fast on missing topic and data in decode logic" directly and accurately describes the main change in the pull request. According to the summary, the core modification refactors the decode_events function to validate the presence of topic0 and data, returning a new MissingRequiredField error variant when either is missing—which is exactly the "fail fast" behavior on missing topic and data described in the title. The title is concise, specific, and clearly communicates the primary objective to someone reviewing commit history without being vague or misleading.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch local-db-decode-fail

Comment @coderabbitai help to get the list of available commands and usage tips.

@findolor findolor linked an issue Oct 22, 2025 that may be closed by this pull request
@findolor findolor added this to the LocalDB Implementation milestone Oct 22, 2025
@findolor
Collaborator Author

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Oct 22, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b7dd54f and 787af21.

📒 Files selected for processing (1)
  • crates/common/src/local_db/decode.rs (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (17)
  • GitHub Check: Deploy-Docs-Preview
  • GitHub Check: standard-tests (ubuntu-latest, rainix-sol-test)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-sol-artifacts)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-wasm-test)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-rs-artifacts, true)
  • GitHub Check: standard-tests (ubuntu-latest, test-js-bindings)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-wasm-artifacts)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-rs-static)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-sol-legal)
  • GitHub Check: standard-tests (ubuntu-latest, ob-rs-test, true)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-sol-static)
  • GitHub Check: Deploy-Preview-Push
  • GitHub Check: build-tauri (ubuntu-22.04, true)
  • GitHub Check: test
  • GitHub Check: git-clean
  • GitHub Check: test
  • GitHub Check: test
🔇 Additional comments (2)
crates/common/src/local_db/decode.rs (2)

24-25: New error variant is clear and useful.

Adds precise context (field, index) for fail‑fast. Good addition.


878-883: Test for missing topic passes and matches new error.

Asserts MissingRequiredField("topic0", index 0). Looks correct.

Comment on lines +119 to +122
let data_without_prefix = trimmed_data
    .strip_prefix("0x")
    .or_else(|| trimmed_data.strip_prefix("0X"))
    .unwrap_or(trimmed_data);
Collaborator


don't we have a helper for this?
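If the team wanted to extract one, such a helper might look like this. The name `strip_hex_prefix` is hypothetical; a later bot check in this thread found no existing helper in the crate.

```rust
// Hypothetical shared helper for stripping an optional 0x/0X prefix.
// Returns the input unchanged when no prefix is present.
fn strip_hex_prefix(s: &str) -> &str {
    s.strip_prefix("0x")
        .or_else(|| s.strip_prefix("0X"))
        .unwrap_or(s)
}
```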

Base automatically changed from local-db-fetch-config to main October 30, 2025 16:56
@findolor findolor enabled auto-merge October 30, 2025 19:38
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 787af21 and 7bf7fa1.

📒 Files selected for processing (1)
  • crates/common/src/local_db/decode.rs (3 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
crates/*/{src,tests,benches,examples}/**/*.rs

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Rust code must pass clippy with all warnings denied (cargo clippy --workspace --all-targets --all-features -D warnings)

Files:

  • crates/common/src/local_db/decode.rs
crates/**/*.rs

📄 CodeRabbit inference engine (AGENTS.md)

Rust code lives in the workspace under crates/* (e.g., cli, common, bindings, js_api, quote, subgraph, settings, math, integration_tests)

Files:

  • crates/common/src/local_db/decode.rs
**/*.rs

📄 CodeRabbit inference engine (AGENTS.md)

**/*.rs: Format Rust with nix develop -c cargo fmt --all
Lint Rust with nix develop -c rainix-rs-static (preconfigured flags)
Use snake_case for Rust crates/modules and PascalCase for types

Files:

  • crates/common/src/local_db/decode.rs
🧠 Learnings (3)
📓 Common learnings
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2117
File: crates/js_api/src/gui/select_tokens.rs:421-424
Timestamp: 2025-09-02T08:04:34.729Z
Learning: User findolor in the rainlanguage/rain.orderbook codebase prefers not to change test code when tests are already passing, even if there appear to be type mismatches or potential issues. Working tests should be left alone unless there are actual errors.
Learnt from: findolor
PR: rainlanguage/rain.orderbook#1956
File: crates/common/src/fuzz/mod.rs:64-64
Timestamp: 2025-07-04T09:02:57.301Z
Learning: In rainlanguage/rain.orderbook, user findolor prefers to limit type consistency changes to only the parts directly related to the current work scope. For example, when updating chain_id fields from u64 to u32 in fuzz-related code, unrelated files like tauri-app wallet commands can remain as u64 if they serve different purposes and aren't part of the current changes.
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2000
File: crates/common/src/raindex_client/vaults.rs:183-183
Timestamp: 2025-07-16T10:40:05.717Z
Learning: In the rainlanguage/rain.orderbook codebase, user findolor considers breaking changes from Option<U256> to U256 for required fields like decimals in RaindexVaultToken to be acceptable and safe, even when they affect multiple usage sites across the codebase.
Learnt from: rouzwelt
PR: rainlanguage/rain.orderbook#0
File: :0-0
Timestamp: 2025-05-21T21:24:53.708Z
Learning: For PR #1884 in rainlanguage/rain.orderbook, provide a separate comment after each review with PR size statistics in the format: `TOTAL=number ADDITIONS=number DELETIONS=number`, excluding lock files like package-lock.json and cargo.lock.
Learnt from: rouzwelt
PR: rainlanguage/rain.orderbook#0
File: :0-0
Timestamp: 2025-05-21T22:47:21.927Z
Learning: For PR #1884 in rainlanguage/rain.orderbook, the PR size report must always be placed at the very beginning of any comment, before any learning used section or other content.
Learnt from: brusherru
PR: rainlanguage/rain.orderbook#2044
File: tauri-app/src/routes/orders/[chainId]-[orderbook]-[orderHash]/+page.svelte:76-78
Timestamp: 2025-07-31T19:03:56.594Z
Learning: In the rainlanguage/rain.orderbook project, when the Tauri app has issues preventing proper testing, the team prefers to defer Tauri-specific implementations to separate PRs rather than including untested code, especially for financial operations like withdrawals.
📚 Learning: 2025-07-21T16:34:31.193Z
Learnt from: 0xgleb
PR: rainlanguage/rain.orderbook#1911
File: crates/subgraph/src/types/impls.rs:7-15
Timestamp: 2025-07-21T16:34:31.193Z
Learning: In the rainlanguage/rain.orderbook codebase, user 0xgleb considers breaking changes that remove unsafe default behaviors to be intentional and acceptable. Specifically, the get_decimals() method in crates/subgraph/src/types/impls.rs was intentionally changed to return MissingDecimals error instead of defaulting to 18 decimals, as defaulting to 18 is considered unsafe and should never have been done.

Applied to files:

  • crates/common/src/local_db/decode.rs
📚 Learning: 2025-07-23T13:03:40.669Z
Learnt from: 0xgleb
PR: rainlanguage/rain.orderbook#1911
File: crates/subgraph/src/vault_balance_changes_query.rs:133-135
Timestamp: 2025-07-23T13:03:40.669Z
Learning: User 0xgleb prefers not to make changes to test code when tests are already passing, even if the code format could be standardized differently. Working test code should be left alone unless there's a functional issue.

Applied to files:

  • crates/common/src/local_db/decode.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-rs-static)
  • GitHub Check: git-clean
  • GitHub Check: test
  • GitHub Check: standard-tests (ubuntu-latest, rainix-sol-artifacts)
  • GitHub Check: standard-tests (ubuntu-latest, ob-rs-test, true)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-sol-legal)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-wasm-test)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-sol-test)
  • GitHub Check: build-tauri (ubuntu-22.04, true)
  • GitHub Check: Deploy-Preview-Push
  • GitHub Check: Deploy-Docs-Preview
🔇 Additional comments (3)
crates/common/src/local_db/decode.rs (3)

24-25: LGTM! Clear error reporting with context.

The new MissingRequiredField variant provides excellent debugging context by including both the field name and the index of the problematic event in the batch.


107-186: Excellent refactoring to iterator-based fail-fast validation.

The refactored implementation properly:

  • Validates required fields (topic0, data) before attempting decode
  • Provides index-based error context for batch operations
  • Uses idiomatic iterator patterns with proper error propagation
  • Preserves existing behavior for metadata field defaulting

904-917: LGTM! Comprehensive coverage for zero-length hex edge case.

The test correctly validates that both "0x" and "0X" are rejected as missing data, addressing the previous review comment's suggestion. The assertion properly checks both the field name and index.

Comment on lines +118 to 128
let trimmed_data = event.data.trim();
let data_without_prefix = trimmed_data
    .strip_prefix("0x")
    .or_else(|| trimmed_data.strip_prefix("0X"))
    .unwrap_or(trimmed_data);
if data_without_prefix.trim().is_empty() {
    return Err(DecodeError::MissingRequiredField {
        field: "data",
        index,
    });
}
Contributor


🧹 Nitpick | 🔵 Trivial

Remove redundant trim() on line 123; validation correctly handles "0x"/"0X" as empty.

The validation logic correctly strips hex prefixes and rejects zero-length hex data, addressing the previous review concern. However, line 123 calls .trim() on data_without_prefix even though the data was already trimmed at line 118.

Apply this diff to remove the redundant trim:

-            if data_without_prefix.trim().is_empty() {
+            if data_without_prefix.is_empty() {

Note: The current implementation correctly rejects both "0x" and "0X" as missing data (confirmed by the new test at lines 904-917), so the previous review comment's concern is fully addressed.

🤖 Prompt for AI Agents
crates/common/src/local_db/decode.rs around lines 118 to 128: remove the
redundant trim call on line 123 by checking data_without_prefix.is_empty()
instead of data_without_prefix.trim().is_empty(), since event.data was already
trimmed and strip_prefix handles "0x"/"0X"; update that conditional to return
the same DecodeError when empty.
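The equivalence behind this nitpick can be checked with a small sketch: because the input is trimmed before prefix stripping, the remainder can never consist only of whitespace, so `trim().is_empty()` and `is_empty()` always agree. The `remainder` function below is illustrative only.

```rust
// Demonstrates that after trimming and prefix stripping, an extra
// trim() cannot change the result of the emptiness check.
fn remainder(raw: &str) -> &str {
    let trimmed = raw.trim();
    trimmed
        .strip_prefix("0x")
        .or_else(|| trimmed.strip_prefix("0X"))
        .unwrap_or(trimmed)
}
```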

Comment on lines +119 to +122
let data_without_prefix = trimmed_data
.strip_prefix("0x")
.or_else(|| trimmed_data.strip_prefix("0X"))
.unwrap_or(trimmed_data);
Contributor


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify if a hex prefix stripping helper exists in the codebase.

The manual prefix stripping is correct and handles both "0x" and "0X" cases. However, as noted in the past review comment, there might be an existing helper function for this common operation.

Based on learnings

Run this script to check for existing hex prefix utilities:


🏁 Script executed:

#!/bin/bash
# Search for potential hex prefix helper functions
rg -n 'fn.*strip.*prefix|fn.*trim.*0x|strip_hex_prefix' --type rust -C 2

# Check if alloy provides utilities
rg -n 'use.*alloy.*hex' --type rust -C 1

# Look for similar patterns in the codebase
rg -n 'strip_prefix.*0x.*strip_prefix.*0X' --type rust -C 2

Length of output: 7958



---

🌐 Web query:

alloy crate hex prefix stripping utility function

💡 Result:

Use Rust's standard string helpers (strip_prefix or trim_start_matches). Example utility for Alloy (handles None/Some and preserves casing):

  • References: alloy_primitives Bytes supports "0x" prefixed hex strings. [1]
  • Rust str::strip_prefix docs. [2]

Code (Rust):

fn strip_0x(s: &str) -> &str {
    s.strip_prefix("0x")
        .or_else(|| s.strip_prefix("0X"))
        .unwrap_or(s)
}

If you need an owned String:

fn strip_0x_owned(s: &str) -> String {
    strip_0x(s).to_string()
}

Or with a manual case-insensitive check:

fn strip_0x_fold(s: &str) -> &str {
    if s.len() >= 2 && s[..2].eq_ignore_ascii_case("0x") { &s[2..] } else { s }
}

Use these before passing hex strings to alloy types/parsers.

Sources: [1] alloy_primitives Bytes docs. [2] Rust str::strip_prefix documentation.



---

🏁 Script executed:

```shell
#!/bin/bash
# View the context around lines 119-122 in decode.rs
cat crates/common/src/local_db/decode.rs | sed -n '110,130p'

# Check for similar prefix stripping patterns in same file
rg -n 'strip_prefix|trim_start_matches' crates/common/src/local_db/decode.rs -C 3

Length of output: 1543


No hex prefix stripping helper exists; however, fix inconsistent prefix handling at line 220.

The implementation at lines 119-122 correctly handles both "0x" and "0X" prefixes and matches the idiomatic Rust approach. However, line 220 in the same file uses trim_start_matches("0x"), which only handles the lowercase variant. Update line 220 to use the same robust approach:

let data = hex::decode(
    trimmed_data
        .strip_prefix("0x")
        .or_else(|| trimmed_data.strip_prefix("0X"))
        .unwrap_or(trimmed_data),
)
.map_err(DecodeError::HexDecode)?;
🤖 Prompt for AI Agents
crates/common/src/local_db/decode.rs around lines 119-122 and line 220: the code
at 119-122 correctly strips both "0x" and "0X" but line 220 only trims "0x"
(lowercase) causing inconsistent behavior; update the logic at line 220 to
mirror the robust approach used above by using strip_prefix("0x").or_else(||
trimmed_data.strip_prefix("0X")).unwrap_or(trimmed_data) before calling
hex::decode and propagate the DecodeError::HexDecode on failure.

@findolor findolor merged commit 63bdea2 into main Oct 30, 2025
19 checks passed
@github-actions
Contributor

@coderabbitai assess this PR size classification for the totality of the PR with the following criterias and report it in your comment:

S/M/L PR Classification Guidelines:

This guide helps classify merged pull requests by effort and complexity rather than just line count. The goal is to assess the difficulty and scope of changes after they have been completed.

Small (S)

Characteristics:

  • Simple bug fixes, typos, or minor refactoring
  • Single-purpose changes affecting 1-2 files
  • Documentation updates
  • Configuration tweaks
  • Changes that require minimal context to review

Review Effort: Would have taken 5-10 minutes

Examples:

  • Fix typo in variable name
  • Update README with new instructions
  • Adjust configuration values
  • Simple one-line bug fixes
  • Import statement cleanup

Medium (M)

Characteristics:

  • Feature additions or enhancements
  • Refactoring that touches multiple files but maintains existing behavior
  • Breaking changes with backward compatibility
  • Changes requiring some domain knowledge to review

Review Effort: Would have taken 15-30 minutes

Examples:

  • Add new feature or component
  • Refactor common utility functions
  • Update dependencies with minor breaking changes
  • Add new component with tests
  • Performance optimizations
  • More complex bug fixes

Large (L)

Characteristics:

  • Major feature implementations
  • Breaking changes or API redesigns
  • Complex refactoring across multiple modules
  • New architectural patterns or significant design changes
  • Changes requiring deep context and multiple review rounds

Review Effort: Would have taken 45+ minutes

Examples:

  • Complete new feature with frontend/backend changes
  • Protocol upgrades or breaking changes
  • Major architectural refactoring
  • Framework or technology upgrades

Additional Factors to Consider

When deciding between sizes, also consider:

  • Test coverage impact: More comprehensive test changes lean toward larger classification
  • Risk level: Changes to critical systems bump up a size category
  • Team familiarity: Novel patterns or technologies increase complexity

Notes:

  • the assessment must be for the totality of the PR, that means comparing the base branch to the last commit of the PR
  • the assessment output must be exactly one of: S, M or L (single-line comment) in format of: SIZE={S/M/L}
  • do not include any additional text, only the size classification
  • your assessment comment must not include tips or additional sections
  • do NOT tag me or anyone else on your comment

@coderabbitai
Contributor

coderabbitai Bot commented Oct 30, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Get the PR diff statistics and file changes
gh pr view 2260 --repo rainlanguage/rain.orderbook --json files,additions,deletions | jq -r '.files[] | "\(.path): +\(.additions) -\(.deletions)"'
echo "---"
gh pr view 2260 --repo rainlanguage/rain.orderbook --json additions,deletions,changedFiles | jq -r '"Total files: \(.changedFiles), Additions: \(.additions), Deletions: \(.deletions)"'

Length of output: 96


SIZE=M


🧠 Learnings used
Learnt from: rouzwelt
PR: rainlanguage/rain.orderbook#0
File: :0-0
Timestamp: 2025-05-21T21:24:42.275Z
Learning: For PR #1884, after each commit review, I should provide a new comment with a PR size report in the format: `TOTAL=number ADDITIONS=number DELETIONS=number`, excluding irrelevant files such as lock files (package-lock.json, cargo.lock, etc.).

Learnt from: rouzwelt
PR: rainlanguage/rain.orderbook#0
File: :0-0
Timestamp: 2025-05-21T21:24:42.275Z
Learning: After each commit in PR #1884, I should provide a new comment with the PR size report in the format: `TOTAL=number ADDITIONS=number DELETIONS=number`. The report should exclude irrelevant files such as lock files (e.g., package-lock.json, cargo.lock).

Learnt from: rouzwelt
PR: rainlanguage/rain.orderbook#0
File: :0-0
Timestamp: 2025-05-21T21:25:47.873Z
Learning: After each new commit review for rainlanguage/rain.orderbook repository, generate a PR size report in a separate comment with the format: "TOTAL=number ADDITIONS=number DELETIONS=number". This report should exclude irrelevant files such as lock files (e.g., package-lock.json, cargo.lock).

Learnt from: rouzwelt
PR: rainlanguage/rain.orderbook#0
File: :0-0
Timestamp: 2025-05-21T21:25:47.873Z
Learning: After each new commit review for rainlanguage/rain.orderbook repository, generate a PR size report in a separate comment with the exact format: "TOTAL=number\nADDITIONS=number\nDELETIONS=number". This report should exclude irrelevant files such as lock files (package-lock.json, yarn.lock, pnpm-lock.yaml, cargo.lock).

Learnt from: rouzwelt
PR: rainlanguage/rain.orderbook#0
File: :0-0
Timestamp: 2025-05-21T22:14:22.646Z
Learning: For PR #1884 in rainlanguage/rain.orderbook, I must consistently report the PR size after EVERY new commit in the exact format "TOTAL=number\nADDITIONS=number\nDELETIONS=number" without any formatting blocks. The report must exclude lock files (package-lock.json, yarn.lock, pnpm-lock.yaml, cargo.lock) and can be included either as part of the review or as a separate comment.

Learnt from: rouzwelt
PR: rainlanguage/rain.orderbook#0
File: :0-0
Timestamp: 2025-05-21T22:14:22.646Z
Learning: For PR #1884 in rainlanguage/rain.orderbook, I must consistently report the PR size after EVERY new commit in the exact format "TOTAL=number\nADDITIONS=number\nDELETIONS=number" without any additional text or formatting blocks. The report must exclude lock files (package-lock.json, yarn.lock, pnpm-lock.yaml, cargo.lock) and can be included either as part of the review or as a separate comment.

Learnt from: rouzwelt
PR: rainlanguage/rain.orderbook#0
File: :0-0
Timestamp: 2025-05-21T22:46:08.530Z
Learning: For PR #1884 in rainlanguage/rain.orderbook, the PR size report should be the only content in the comment - no text before it, no text after it, no formatting blocks, just the raw report in the exact format: "TOTAL=number\nADDITIONS=number\nDELETIONS=number". The report must exclude lock files (package-lock.json, yarn.lock, pnpm-lock.yaml, cargo.lock).

Learnt from: rouzwelt
PR: rainlanguage/rain.orderbook#0
File: :0-0
Timestamp: 2025-05-21T21:24:53.708Z
Learning: For PR #1884 in rainlanguage/rain.orderbook, provide a separate comment after each review with PR size statistics in the format: `TOTAL=number ADDITIONS=number DELETIONS=number`, excluding lock files like package-lock.json and cargo.lock.


Learnt from: findolor
PR: rainlanguage/rain.orderbook#1971
File: .github/workflows/vercel-docs-prod.yaml:32-34
Timestamp: 2025-07-11T12:30:09.245Z
Learning: findolor prefers not to pin Node versions explicitly in GitHub Actions workflows when using Nix with webapp-shell, considering the current setup sufficient for deterministic builds in the rainlanguage/rain.orderbook project.

Learnt from: findolor
PR: rainlanguage/rain.orderbook#1971
File: .github/workflows/vercel-docs-prod.yaml:10-14
Timestamp: 2025-07-11T12:29:49.284Z
Learning: In GitHub Actions workflows for the rainlanguage/rain.orderbook project, findolor is comfortable with job names that may seem misleading (like "Deploy-Docs-Preview" in a production workflow) when there's only one job in the workflow, as the context makes it clear enough.

Learnt from: findolor
PR: rainlanguage/rain.orderbook#2117
File: crates/js_api/src/gui/select_tokens.rs:421-424
Timestamp: 2025-09-02T08:04:34.729Z
Learning: User findolor in the rainlanguage/rain.orderbook codebase prefers not to change test code when tests are already passing, even if there appear to be type mismatches or potential issues. Working tests should be left alone unless there are actual errors.

Learnt from: findolor
PR: rainlanguage/rain.orderbook#1956
File: crates/common/src/fuzz/mod.rs:64-64
Timestamp: 2025-07-04T09:02:57.301Z
Learning: In rainlanguage/rain.orderbook, user findolor prefers to limit type consistency changes to only the parts directly related to the current work scope. For example, when updating chain_id fields from u64 to u32 in fuzz-related code, unrelated files like tauri-app wallet commands can remain as u64 if they serve different purposes and aren't part of the current changes.

Learnt from: findolor
PR: rainlanguage/rain.orderbook#2000
File: crates/common/src/raindex_client/vaults.rs:183-183
Timestamp: 2025-07-16T10:40:05.717Z
Learning: In the rainlanguage/rain.orderbook codebase, user findolor considers breaking changes from Option<U256> to U256 for required fields like decimals in RaindexVaultToken to be acceptable and safe, even when they affect multiple usage sites across the codebase.

Learnt from: rouzwelt
PR: rainlanguage/rain.orderbook#0
File: :0-0
Timestamp: 2025-05-21T22:47:21.927Z
Learning: For PR #1884 in rainlanguage/rain.orderbook, the PR size report must always be placed at the very beginning of any comment, before any learning used section or other content.



Learnt from: findolor
PR: rainlanguage/rain.orderbook#1938
File: crates/js_api/src/raindex/mod.rs:102-118
Timestamp: 2025-06-18T12:56:44.290Z
Learning: In the rainlanguage/rain.orderbook codebase, it's acceptable to scaffold unused enum variants in initial implementation PRs when they will be implemented in future PRs, as confirmed by findolor.


Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Fail Fast When Decoding Logs Missing Required Fields

3 participants