
Conversation

@MasterPtato
Contributor

No description provided.

@vercel

vercel bot commented Nov 18, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

4 Skipped Deployments (all updated Nov 18, 2025 2:30am UTC):
  • rivet-cloud: Ignored
  • rivet-inspector: Ignored
  • rivet-site: Ignored
  • rivetkit-serverless: Skipped

Contributor Author

MasterPtato commented Nov 18, 2025

Warning

This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more


How to use the Graphite Merge Queue

Add the label merge-queue to this PR to add it to the merge queue.

You must have a Graphite account in order to use the merge queue. Sign up using this link.

An organization admin has enabled the Graphite Merge Queue in this repository.

Please do not merge from GitHub as this will restart CI on PRs being processed by the merge queue.

This stack of pull requests is managed by Graphite. Learn more about stacking.

@claude

claude bot commented Nov 18, 2025

Code Review - PR #3476

Summary

This PR correctly changes StreamingMode::WantAll to StreamingMode::Exact for all range queries that specify an explicit limit parameter. This is an appropriate optimization based on FoundationDB's streaming mode semantics.

Positive Findings

Correctness

  • The change is semantically correct according to the FoundationDB documentation in engine/packages/universaldb/src/options.rs:193:
    • Exact: "A row limit must be specified if this mode is used"
    • WantAll: "Client intends to consume the entire range"
  • All modified locations specify explicit limits via limit: Some(...), making Exact the appropriate choice
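
For illustration, here is a minimal, self-contained sketch of the pattern this PR applies. The StreamingMode enum and RangeQuery struct below are stand-ins for the universaldb types, not the actual definitions; only the WantAll to Exact switch alongside an explicit limit mirrors the change under review.

```rust
// Illustrative stand-ins only; not the actual universaldb definitions.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum StreamingMode {
    /// Client intends to consume the entire range; the server streams
    /// results in progressively larger batches.
    WantAll,
    /// Client knows exactly how many rows it needs; a row limit must be
    /// specified when this mode is used.
    Exact,
}

/// Hypothetical range-query options mirroring the fields named in this review.
struct RangeQuery {
    begin: Vec<u8>,
    end: Vec<u8>,
    limit: Option<usize>,
    mode: StreamingMode,
}

fn main() {
    // Before this PR: an explicit limit was paired with WantAll, so the
    // server still batched as if the whole range would be consumed.
    let before = RangeQuery {
        begin: b"instance/".to_vec(),
        end: b"instance0".to_vec(),
        limit: Some(1),
        mode: StreamingMode::WantAll,
    };

    // After this PR: Exact tells the server to return precisely `limit`
    // rows, avoiding the extra batching round trips.
    let after = RangeQuery {
        mode: StreamingMode::Exact,
        limit: Some(1),
        end: before.end.clone(),
        begin: before.begin.clone(),
    };

    // Exact is only valid when a row limit is present.
    assert!(after.limit.is_some());
    assert_ne!(before.mode, after.mode);
}
```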

Performance

  • Using Exact mode when the row limit is known up front avoids unnecessary data transfer and batching overhead
  • This is particularly beneficial for the affected use cases:
    • download_instances.rs: Downloading a specific count of instances
    • recover_keys_chunk: Recovery operations with fixed batch sizes
    • list_names.rs: List operations with pagination limits
    • debug.rs: Debug queries checking for existence (limit: Some(1))

Consistency

  • I verified that the remaining WantAll usages don't specify a limit, confirming this PR caught all the relevant cases
  • The change is applied consistently across 6 files

Code Quality

  • One minor improvement in engine/packages/epoxy/src/workflows/replica/setup.rs:443-449: The struct fields were reordered (mode and limit moved before end and begin), which improves readability by grouping the critical query parameters together
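
A hedged sketch of that reordering, reusing the illustrative stand-ins from above rather than the actual setup.rs identifiers (the helper function name here is made up for the example):

```rust
// Illustrative only; the types and the function name are stand-ins, not the
// real setup.rs code.
enum StreamingMode {
    Exact,
}

struct RangeQuery {
    begin: Vec<u8>,
    end: Vec<u8>,
    limit: Option<usize>,
    mode: StreamingMode,
}

// Hypothetical helper showing the reordered literal: mode and limit lead,
// so the query shape (an exact, bounded read) is visible before the key
// selectors.
fn chunk_query(begin: Vec<u8>, end: Vec<u8>, chunk_size: usize) -> RangeQuery {
    RangeQuery {
        mode: StreamingMode::Exact,
        limit: Some(chunk_size),
        end,
        begin,
    }
}

fn main() {
    let q = chunk_query(b"keys/".to_vec(), b"keys0".to_vec(), 128);
    assert!(q.limit.is_some());
}
```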

Potential Considerations

Testing

  • Consider verifying that the behavior is unchanged for these operations, particularly:
    • download_instances - ensure instance downloads still work correctly
    • recover_keys_chunk - ensure recovery operations complete successfully
    • The debug queries with limit: Some(1) - these should be fast lookups

While the semantic change is minimal (both modes should return the same results when a limit is specified), the different batching behavior could theoretically expose edge cases.

Documentation

  • The PR title and description are minimal. Consider adding context about why this change improves performance (reducing unnecessary batching overhead when exact limits are known).

Verdict

LGTM - This is a well-targeted performance optimization with no apparent bugs or issues. The change correctly applies StreamingMode::Exact to all range queries with explicit limits, which aligns with FoundationDB best practices.

The only suggestion is to ensure adequate test coverage for the affected code paths, though this appears to be a low-risk refactoring.
