RUBY-2961 Treat 0 limit as no limit and negative limit as positive limit in query caching #2452
Conversation
p-mongo left a comment
Very thorough, thank you for your work @mikebaldry
```ruby
    end
  end

  def normalized_limit(limit)
```
I think this would be more at home under Cursor somewhere, thoughts?
Though a limit doesn't require creation of a cursor.
I would prefer collection not to reference QueryCache for implementation details.
I did consider having CachingCursor expose a method to get a limited set of values and handle the normalization internally there, somehow. Didn't get that far though; happy to attempt it.
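For reference, the normalization under discussion amounts to something like the sketch below. The standalone module and the example calls are illustrative assumptions, not the PR's actual code or placement: a limit of 0 or nil means "no limit", and a negative limit caps results the same way its absolute value does.

```ruby
# Illustrative sketch only -- not the PR's implementation or final location.
module QueryCache
  # Convert a user-supplied limit into the value used for cache lookups:
  # nil or 0 means "no limit"; a negative limit caps results the same way
  # its absolute value does.
  def self.normalized_limit(limit)
    return nil if limit.nil? || limit.zero?

    limit.abs
  end
end

QueryCache.normalized_limit(0)   # => nil (no limit)
QueryCache.normalized_limit(-5)  # => 5
QueryCache.normalized_limit(7)   # => 7
```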
```ruby
  limit_for_cached_query = respond_to?(:limit) ? QueryCache.normalized_limit(limit) : nil
end

if block_given?
```
Can't comment on line 89 because it's not in the diff, but I'll ask on this one:
If we call `each` without a block, it'll just return an Enumerator, which won't have the cached limit applied.
This case should probably be handled by always doing the positive case, and then calling `to_enum` on the result of it.
line 79 could end up being something like:

```ruby
result = if use_query_cache?
  # Is there a case where limit doesn't exist? CachingCursor inherits from
  # Cursor, which implements limit, so I don't see why we have the
  # `respond_to?(:limit)` check.
  @cursor.limited_to(limit)
else
  @cursor
end

if block_given?
  # iterate result and yield
else
  result.to_enum
end
```

Just ideas at the moment, let me know what you think.
cc @p-mongo (not sure if you'd normally get notifications for these comments on someone else's PR)
I do @mikebaldry, but thank you for the highlight. We'll discuss this PR with the team to figure out the best course of action.
The issue is now under the driver project - https://jira.mongodb.org/browse/RUBY-2961
p-mongo left a comment
👍 to merge as is per today's discussion.
Related to https://jira.mongodb.org/browse/MONGOID-5285

MongoDB treats a `limit` of `0` as having no limit at all, but the query cache does not take this into consideration: it returns 0 documents, and it does not reuse existing cached results for a 0 limit when they could be used.

MongoDB treats a negative `limit` the same as a positive limit, except that it signifies the client will not call `getMore` if there aren't enough documents in the first batch. The query cache was applying the raw negative value as a Ruby array index, giving back unexpected results.
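To illustrate the negative-limit problem, the snippet below assumes the cached documents are sliced with the raw limit (an assumption about the pattern, not code from the PR); a raw Ruby array slice behaves very differently from MongoDB's semantics for these edge cases.

```ruby
cached_docs = [:a, :b, :c, :d, :e]

# Slicing the cached documents with the raw limit:
cached_docs[0...0]   # => []            -- but a limit of 0 should mean "no limit"
cached_docs[0...-2]  # => [:a, :b, :c]  -- but a limit of -2 should return 2 documents

# Normalizing first (0 => no limit, negative => absolute value) restores the
# intended behaviour:
limit = -2
normalized = limit.zero? ? nil : limit.abs
normalized ? cached_docs.first(normalized) : cached_docs
# => [:a, :b]
```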