RUBY-2961 Treat 0 limit as no limit and negative limit as positive limit in query caching #2452
```diff
@@ -179,7 +179,8 @@ def set(cursor, **opts)
       #
       # @api private
       def get(**opts)
-        limit = opts[:limit]
+        limit = normalized_limit(opts[:limit])

         _namespace_key = namespace_key(**opts)
         _cache_key = cache_key(**opts)

@@ -189,7 +190,7 @@ def get(**opts)
         caching_cursor = namespace_hash[_cache_key]
         return nil unless caching_cursor

-        caching_cursor_limit = caching_cursor.view.limit
+        caching_cursor_limit = normalized_limit(caching_cursor.view.limit)

         # There are two scenarios in which a caching cursor could fulfill the
         # query:

@@ -199,6 +200,7 @@ def get(**opts)
         #
         # Otherwise, return nil because the stored cursor will not satisfy
         # the query.
+
         if limit && (caching_cursor_limit.nil? || caching_cursor_limit >= limit)
           caching_cursor
         elsif limit.nil? && caching_cursor_limit.nil?

@@ -208,6 +210,14 @@ def get(**opts)
         end
       end

+      def normalized_limit(limit)
```
Contributor: I think this would be more at home under `Cursor` somewhere, thoughts?

Contributor: Though a limit doesn't require creation of a cursor. I would prefer `Collection` to not reference `QueryCache` for implementation parts.

Contributor (author): I did consider `CachingCursor` exposing a method to get a limited set of values and handling the normalization internally there, somehow. Didn't get that far though; happy to attempt it.
```diff
+        return nil unless limit
+        # For the purposes of caching, a limit of 0 means no limit, as mongo treats it as such.
+        return nil if limit == 0
+        # For the purposes of caching, a negative limit is the same as a positive limit.
+        limit.abs
+      end

       private

       def cache_key(**opts)
```
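Taken together, the normalization and the cache-hit check can be sketched as a standalone snippet. `normalized_limit` mirrors the diff above; `cursor_satisfies?` is a hypothetical helper, not driver code, that condenses the branch logic in `get` so it can be exercised in isolation:

```ruby
# Sketch of the caching logic in this PR. normalized_limit mirrors the
# diff; cursor_satisfies? is a hypothetical helper for illustration only.
def normalized_limit(limit)
  return nil unless limit
  # MongoDB treats a limit of 0 as "no limit".
  return nil if limit == 0
  # A negative limit caps the result set the same way a positive one does.
  limit.abs
end

# A cached cursor can serve a query when the query asks for no more
# documents than the cursor holds (nil meaning "unlimited").
def cursor_satisfies?(query_limit, cursor_limit)
  limit = normalized_limit(query_limit)
  caching_cursor_limit = normalized_limit(cursor_limit)
  if limit
    caching_cursor_limit.nil? || caching_cursor_limit >= limit
  else
    caching_cursor_limit.nil?
  end
end

cursor_satisfies?(5, 0)    # => true  (0 normalizes to "no limit")
cursor_satisfies?(-5, 10)  # => true  (-5 normalizes to 5; 10 >= 5)
cursor_satisfies?(nil, 10) # => false (unlimited query, limited cursor)
```

With this normalization, `find(limit: -5)` and `find(limit: 5)` hit the same cache-satisfaction path, and `find(limit: 0)` is cached like an unlimited query.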
Contributor: Can't comment on line 89 because it's not in the diff, but I'll ask on this one: if we call `each` without a block, it'll just return an `Enumerable`, which won't have the cached limit applied. This case probably should be handled by always doing the positive case and then calling `to_enum` on the result of it.

Contributor: Line 79 could end up being something like … Just ideas at the moment, let me know what you think.
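The blockless-`each` concern above can be illustrated with a minimal sketch. `LimitedDocs` is a hypothetical class, not driver code; it contrasts the shape the comment warns about (returning the raw enumerator) with the `to_enum`-based shape it suggests:

```ruby
# Hypothetical class illustrating the reviewer's point about blockless each.
class LimitedDocs
  def initialize(docs, limit)
    @docs = docs
    @limit = limit
  end

  # Problematic shape: without a block, the raw docs are enumerated and
  # the limit is silently lost.
  def each_buggy(&block)
    return @docs.each unless block
    @docs.first(@limit).each(&block)
  end

  # Suggested shape: apply the limit first, then hand out an Enumerator,
  # so the block and blockless forms see the same documents.
  def each(&block)
    limited = @docs.first(@limit)
    return limited.to_enum(:each) unless block
    limited.each(&block)
  end
end

docs = LimitedDocs.new([1, 2, 3, 4], 2)
docs.each_buggy.to_a # limit lost: [1, 2, 3, 4]
docs.each.to_a       # limit applied: [1, 2]
```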
Contributor: cc @p-mongo (not sure if you'd normally get notifications for these comments on someone else's PR)
Contributor: I do @mikebaldry, but thank you for the highlight. We'll discuss this PR with the team to figure out the best course of action.