Note: Reviews paused. This branch appears to be under active development, so CodeRabbit has automatically paused this review to avoid an influx of comments from new commits.
Walkthrough
Adds a configurable QueryTimeout across node configs and the RPC client, wires the timeout into RPC calls and call sites, improves transient query error handling, and updates several dependencies and e2e Docker image tags.
Sequence Diagram

```mermaid
sequenceDiagram
    actor App as Application
    participant Cfg as Config
    participant Node as Node / RPCClient
    participant Provider as Provider (host/child/tx)
    participant RPC as Remote RPC Endpoint
    App->>Cfg: load config (includes QueryTimeout)
    Cfg->>Node: initialize node (pass QueryTimeout)
    Node->>Node: SetQueryTimeout(duration)
    Provider->>Node: request query (uses RPC client)
    Node->>Node: GetQueryContextWithTimeout(ctx, height, QueryTimeout)
    Node->>RPC: execute query with timeout-aware ctx
    RPC-->>Node: response or error
    Node-->>Provider: return result or timed error
    Provider-->>App: deliver processed result
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed (1 warning)
Comment
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
provider/host/query.go (1)
Lines 320-342: ⚠️ Potential issue | 🟡 Minor — Missing context cancellation check in pagination loop.
The `QueryValidators` function creates a timeout context at line 322, but the pagination loop (lines 329-338) does not check `ctx.Done()`. If the timeout fires mid-pagination, the loop will continue making requests until it naturally completes or hits an error. Compare with `QueryBatchInfos` (lines 158-162) and `QueryDepositTxHeight` (lines 202-206), which properly check `ctx.Done()` in their loops.
🛡️ Proposed fix to add context check
```diff
 func (b BaseHost) QueryValidators(ctx context.Context, height int64) ([]*cmtypes.Validator, error) {
 	ctx, cancel := rpcclient.GetQueryContextWithTimeout(ctx, 0, b.node.GetRPCClient().QueryTimeout())
 	defer cancel()

 	page := 1
 	perPage := 100
 	validators := make([]*cmtypes.Validator, 0)
 	for {
+		select {
+		case <-ctx.Done():
+			return nil, ctx.Err()
+		default:
+		}
+
 		result, err := b.node.GetRPCClient().Validators(ctx, &height, &page, &perPage)
 		if err != nil {
 			return nil, err
 		}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@provider/host/query.go` around lines 320-342, The pagination loop in QueryValidators does not check for context cancellation; inside the for loop (where you call b.node.GetRPCClient().Validators and append to validators) add a non-blocking check for ctx.Done() and if closed return nil with ctx.Err() (or propagate the context error) to stop further RPC calls when the timeout/cancellation fires; keep existing pagination logic (page, perPage, result.Total) but bail out early on context cancellation.

e2e/go.mod (1)
Line 31: ⚠️ Potential issue | 🟠 Major — Upgrade `github.com/gofiber/fiber/v2` from v2.52.5 to v2.52.11 to fix critical security vulnerabilities.
The current version v2.52.5 is vulnerable to three CVEs:
- CVE-2025-48075 (fixed in v2.52.7): `BodyParser` panic on negative range index
- CVE-2025-54801 / GHSA-qx2q-88mx-vhg7 (fixed in v2.52.9): `BodyParser` crash via unvalidated large slice index (DoS)
- CVE-2025-66630 / GHSA-68rr-p4fp-j59v (fixed in v2.52.11): insecure UUID fallback to a predictable all-zero UUID on Go < 1.24 (affects session/token security)
All are patched in v2.52.11 (latest, published Jan 31, 2026). While marked as indirect in go.mod, the module directly requires this dependency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@e2e/go.mod` at line 31, Update the indirect dependency declaration for github.com/gofiber/fiber/v2 in e2e/go.mod from v2.52.5 to v2.52.11 to pick up security patches (CVE-2025-48075, CVE-2025-54801, CVE-2025-66630), then run go get github.com/gofiber/fiber/v2@v2.52.11 (or go mod tidy) to update go.sum and vendor as needed and verify the project builds/tests (look for the dependency line "github.com/gofiber/fiber/v2" in the file and ensure it shows v2.52.11).
🧹 Nitpick comments (2)
provider/child/query.go (1)
Lines 83-120: Potential timeout issue with paginated query.
The timeout context is created once at line 90, before the pagination loop, meaning the configured timeout applies to the entire paginated operation. If the query has many pages or the node is slow, the operation may time out before completing all pages.
This may be intentional (cap total operation time), but consider whether you need a per-page timeout or a longer timeout for paginated queries.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@provider/child/query.go` around lines 83-120, The QueryGranteeGrants function creates a single timeout context (rpcclient.GetQueryContextWithTimeout) before the pagination loop causing the whole multi-page fetch to be subject to that single timeout; change it to create a per-request context inside the loop (call rpcclient.GetQueryContextWithTimeout at the start of each iteration, use the returned ctx when calling authzClient.GranteeGrants, and call the corresponding cancel after each request) so each page request gets its own timeout rather than the entire paginated operation being constrained by one shared ctx; keep the ticker and pagination logic (req.Pagination.Key = res.Pagination.NextKey, break when NextKey == nil) unchanged.

executor/child/withdraw.go (1)
Lines 142-154: Consider making the "eof" match more specific.
The substring match for "eof" is quite short and could potentially match unintended content in error messages (though unlikely in practice). Consider matching "eof" at word boundaries or checking for more specific patterns like " eof", "eof:", or the full "unexpected eof".
♻️ Suggested refinement
```diff
 	msg := strings.ToLower(err.Error())
 	return strings.Contains(msg, "context deadline exceeded") ||
 		strings.Contains(msg, "post failed") ||
 		strings.Contains(msg, "i/o timeout") ||
 		strings.Contains(msg, "connection refused") ||
 		strings.Contains(msg, "connection reset by peer") ||
-		strings.Contains(msg, "eof")
+		strings.Contains(msg, "unexpected eof") ||
+		strings.HasSuffix(msg, "eof")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@executor/child/withdraw.go` around lines 142 - 154, The current isRetryableQueryErr function uses a very short substring "eof" which can match unintended text; update isRetryableQueryErr to detect EOF more specifically by matching patterns like " unexpected eof", " eof:", "eof:" or the full token "unexpected eof" (or use a word-boundary regex) instead of a bare "eof" so only true EOF errors are considered retryable; adjust the check alongside the other conditions (keep the other substrings intact) and use strings.Contains or a compiled regexp inside isRetryableQueryErr to perform the refined match.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@node/types/config.go`:
- Around line 31-34: Add a guard that ensures the QueryTimeout field is
non‑negative: locate the config struct that declares QueryTimeout and add
validation in its existing Validate() method (or create one if missing) to
return an error when QueryTimeout < 0; if you have a constructor like NewConfig
or an Unmarshal/Load path, enforce the same check there and return a clear error
message referencing QueryTimeout so callers cannot set a negative duration.
---
Outside diff comments:
In `@e2e/go.mod`:
- Line 31: Update the indirect dependency declaration for
github.com/gofiber/fiber/v2 in e2e/go.mod from v2.52.5 to v2.52.11 to pick up
security patches (CVE-2025-48075, CVE-2025-54801, CVE-2025-66630), then run go
get github.com/gofiber/fiber/v2@v2.52.11 (or go mod tidy) to update go.sum and
vendor as needed and verify the project builds/tests (look for the dependency
line "github.com/gofiber/fiber/v2" in the file and ensure it shows v2.52.11).
In `@provider/host/query.go`:
- Around line 320-342: The pagination loop in QueryValidators does not check for
context cancellation; inside the for loop (where you call
b.node.GetRPCClient().Validators and append to validators) add a non-blocking
check for ctx.Done() and if closed return nil with ctx.Err() (or propagate the
context error) to stop further RPC calls when the timeout/cancellation fires;
keep existing pagination logic (page, perPage, result.Total) but bail out early
on context cancellation.
---
Nitpick comments:
In `@executor/child/withdraw.go`:
- Around line 142-154: The current isRetryableQueryErr function uses a very
short substring "eof" which can match unintended text; update
isRetryableQueryErr to detect EOF more specifically by matching patterns like "
unexpected eof", " eof:", "eof:" or the full token "unexpected eof" (or use a
word-boundary regex) instead of a bare "eof" so only true EOF errors are
considered retryable; adjust the check alongside the other conditions (keep the
other substrings intact) and use strings.Contains or a compiled regexp inside
isRetryableQueryErr to perform the refined match.
In `@provider/child/query.go`:
- Around line 83-120: The QueryGranteeGrants function creates a single timeout
context (rpcclient.GetQueryContextWithTimeout) before the pagination loop
causing the whole multi-page fetch to be subject to that single timeout; change
it to create a per-request context inside the loop (call
rpcclient.GetQueryContextWithTimeout at the start of each iteration, use the
returned ctx when calling authzClient.GranteeGrants, and call the corresponding
cancel after each request) so each page request gets its own timeout rather than
the entire paginated operation being constrained by one shared ctx; keep the
ticker and pagination logic (req.Pagination.Key = res.Pagination.NextKey, break
when NextKey == nil) unchanged.
Comment
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
provider/host/query.go (2)
Lines 148-162: ⚠️ Potential issue | 🟠 Major — Timeout context may expire during pagination loop.
The timeout context is created once before the loop, but the loop can iterate multiple times with `PollingInterval()` delays between iterations. If `QueryTimeout` is shorter than (pages × PollingInterval), the context will cancel mid-pagination, potentially returning incomplete results. Consider either:
- moving the timeout context creation inside the loop for per-request timeouts, or
- using the parent `botCtx` for loop control and creating a fresh timeout context per RPC call.
Proposed fix: per-request timeout
```diff
 func (b BaseHost) QueryBatchInfos(botCtx types.Context, bridgeId uint64) ([]ophosttypes.BatchInfoWithOutput, error) {
-	ctx, cancel := rpcclient.GetQueryContextWithTimeout(botCtx, 0, b.node.GetRPCClient().QueryTimeout())
-	defer cancel()
-
 	ticker := time.NewTicker(botCtx.PollingInterval())
 	defer ticker.Stop()

 	var batchInfos []ophosttypes.BatchInfoWithOutput
 	var nextKey []byte
 	for {
 		select {
-		case <-ctx.Done():
-			return nil, ctx.Err()
+		case <-botCtx.Done():
+			return nil, botCtx.Err()
 		case <-ticker.C:
 		}

 		req := &ophosttypes.QueryBatchInfosRequest{
 			BridgeId: bridgeId,
 			Pagination: &query.PageRequest{
 				Limit: 100,
 				Key:   nextKey,
 			},
 		}
+		ctx, cancel := rpcclient.GetQueryContextWithTimeout(botCtx, 0, b.node.GetRPCClient().QueryTimeout())
 		res, err := b.ophostQueryClient.BatchInfos(ctx, req)
+		cancel()
 		if err != nil {
 			return nil, err
 		}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@provider/host/query.go` around lines 148 - 162, The current BaseHost.QueryBatchInfos creates a single timeout context via rpcclient.GetQueryContextWithTimeout before the pagination loop, causing ctx to potentially expire mid-pagination; change to create a fresh per-request timeout context inside the loop (i.e., call rpcclient.GetQueryContextWithTimeout at the start of each iteration before the RPC call that uses nextKey) and ensure you call cancel() for each created context to avoid leaks; alternatively, if you prefer global control, drive the loop with the parent botCtx and create a short-lived timeout context around each RPC call — update uses of ctx in QueryBatchInfos accordingly so each page fetch has its own timeout tied to QueryTimeout and respects botCtx cancellation.
Lines 184-206: ⚠️ Potential issue | 🟠 Major — Same timeout scoping issue as `QueryBatchInfos`.
The timeout context created at line 192 applies to the entire pagination loop. With ticker delays consuming timeout budget, the context may cancel before all pages are fetched. Apply the same per-request timeout pattern suggested for `QueryBatchInfos`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@provider/host/query.go` around lines 184 - 206, The timeout context used in QueryDepositTxHeight (created via rpcclient.GetQueryContextWithTimeout and assigned to ctx/cancel before the pagination loop) currently covers the entire pagination loop and can expire due to ticker delays; change this to create a fresh per-request context inside the for page loop (i.e., call rpcclient.GetQueryContextWithTimeout(...) at the start of each page iteration and defer/ensure cancel() immediately after the single request completes) so each page query has its own timeout budget while keeping the outer loop logic, ticker handling, and use of ophosttypes.EventTypeInitiateTokenDeposit / ophosttypes.AttributeKeyL1Sequence intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@provider/host/query.go`:
- Around line 148-162: The current BaseHost.QueryBatchInfos creates a single
timeout context via rpcclient.GetQueryContextWithTimeout before the pagination
loop, causing ctx to potentially expire mid-pagination; change to create a fresh
per-request timeout context inside the loop (i.e., call
rpcclient.GetQueryContextWithTimeout at the start of each iteration before the
RPC call that uses nextKey) and ensure you call cancel() for each created
context to avoid leaks; alternatively, if you prefer global control, drive the
loop with the parent botCtx and create a short-lived timeout context around each
RPC call — update uses of ctx in QueryBatchInfos accordingly so each page fetch
has its own timeout tied to QueryTimeout and respects botCtx cancellation.
- Around line 184-206: The timeout context used in QueryDepositTxHeight (created
via rpcclient.GetQueryContextWithTimeout and assigned to ctx/cancel before the
pagination loop) currently covers the entire pagination loop and can expire due
to ticker delays; change this to create a fresh per-request context inside the
for page loop (i.e., call rpcclient.GetQueryContextWithTimeout(...) at the start
of each page iteration and defer/ensure cancel() immediately after the single
request completes) so each page query has its own timeout budget while keeping
the outer loop logic, ticker handling, and use of
ophosttypes.EventTypeInitiateTokenDeposit / ophosttypes.AttributeKeyL1Sequence
intact.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (1)
provider/host/query.go
Comment
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
provider/host/query.go (1)
Lines 149-175: ⚠️ Potential issue | 🟠 Major — Timeout scope should be per-RPC call, not the entire pagination loop.
Creating the timeout context once means it can expire mid-pagination (or during ticker waits), causing later requests to fail immediately even though `botCtx` is still active. Prefer a fresh timeout context inside each iteration.
🛠️ Suggested fix (per-call timeout context inside the loop)
```diff
-	ctx, cancel := rpcclient.GetQueryContextWithTimeout(botCtx, 0, b.node.GetRPCClient().QueryTimeout())
-	defer cancel()
-
 	ticker := time.NewTicker(botCtx.PollingInterval())
 	defer ticker.Stop()
@@
-		case <-ctx.Done():
-			return nil, ctx.Err()
+		case <-botCtx.Done():
+			return nil, botCtx.Err()
 		case <-ticker.C:
 		}
@@
-		res, err := b.ophostQueryClient.BatchInfos(ctx, req)
+		requestCtx, cancel := rpcclient.GetQueryContextWithTimeout(botCtx, 0, b.node.GetRPCClient().QueryTimeout())
+		res, err := b.ophostQueryClient.BatchInfos(requestCtx, req)
+		cancel()
 		if err != nil {
 			return nil, err
 		}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@provider/host/query.go` around lines 149 - 175, The timeout context is created once via rpcclient.GetQueryContextWithTimeout (ctx, cancel) before the pagination loop, so it can expire during ticker waits and cause later BatchInfos calls to fail; move creation of the per-RPC timeout context inside the loop immediately before calling b.ophostQueryClient.BatchInfos (create a fresh ctx/cancel from botCtx each iteration using GetQueryContextWithTimeout with the same QueryTimeout), use that ctx for the RPC, then call cancel() after the RPC completes (and keep the outer ticker logic and nextKey/pagination handling intact).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@provider/host/query.go`:
- Around line 149-175: The timeout context is created once via
rpcclient.GetQueryContextWithTimeout (ctx, cancel) before the pagination loop,
so it can expire during ticker waits and cause later BatchInfos calls to fail;
move creation of the per-RPC timeout context inside the loop immediately before
calling b.ophostQueryClient.BatchInfos (create a fresh ctx/cancel from botCtx
each iteration using GetQueryContextWithTimeout with the same QueryTimeout), use
that ctx for the RPC, then call cancel() after the RPC completes (and keep the
outer ticker logic and nextKey/pagination handling intact).
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (1)
provider/host/query.go
Summary by CodeRabbit
New Features
- Configurable query timeout for node RPC queries.
Bug Fixes
- Improved handling of transient query errors.
Chores
- Updated several dependencies and e2e Docker image tags.