Non-blocking Redis calls: preparation (import libs, write async client, adapt tests, etc.) #77
Changes from all commits
b66918d
d0cda35
b988de1
babe251
15c9d32
f687302
b4dd9a5
2e7ee3e
5e09973
0dbb2d5
60850db
1e0571c
66c30da
b636e86
9970253
215f986
e065c65
c65546d
d9308a6
8572f5e
1d90d27
a59aacd
39cf772
0cef4d6
9035dbd
ba5a046
@@ -1 +1 @@
-2.2.4
+2.3.6
@@ -94,7 +94,7 @@ def all_by_service_and_app(service_id, app_id, user_id = nil)
           Token.from_value token, service_id, value, ttl
         end
       end
-      .force.tap do
+      .tap do
         # delete expired tokens (nil values) from token set
         deltokens.each_slice(TOKEN_MAX_REDIS_SLICE_SIZE) do |delgrp|
           storage.srem token_set, delgrp
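A side note on what the removed 'force' was doing. The snippet below is not from the PR; it is just a minimal sketch of Ruby's Enumerator::Lazy behaviour the chain above relied on. 'force' is an alias of 'to_a' defined only on lazy enumerators, so once 'tokens_from' (changed below) stops returning a lazy enumerator, the '.force' has to go and '.tap' can be chained directly. The variable 'raw_tokens' is invented for the example.

# Illustrative only; 'raw_tokens' is a made-up variable, not from the PR.
raw_tokens = %w[tok1 tok2 tok3]

lazy = raw_tokens.lazy.map { |t| t.upcase }
lazy.force                   # => ["TOK1", "TOK2", "TOK3"]; 'force' == 'to_a', runs the lazy chain

eager = raw_tokens.map { |t| t.upcase }
# eager.force                # NoMethodError: plain Arrays do not define 'force'
eager.tap { |ts| ts.size }   # 'tap' works on any object, so '.tap do ... end' still chains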
@@ -185,10 +185,7 @@ def remove_whole_token_set(token_set, service_id)
     # TODO: provide a SSCAN interface with lazy enums because SMEMBERS
     # is prone to DoSing and timeouts
     def tokens_from(token_set)
-      # It is important that we make this a lazy enumerator. The
-      # laziness is maintained until some enumerator forces execution or
-      # the caller calls 'to_a' or 'force', whichever happens first.
-      storage.smembers(token_set).lazy
+      storage.smembers(token_set)
So, the commit message is just wrong.

If … See, before the call to … After the call to …

Ok, so this is to clarify: there are 2 effects at play for minimising both CPU and memory usage, which essentially correspond to external and internal fragmentation.

The 1st is interesting in its own right (but we haven't yet implemented it), certainly the most interesting one. But this is removing the 2nd, and we really need to find a good reason to do so (it might be that we aren't taking advantage of the laziness, though, I just don't know at this point). The important thing, if the justification exists, is that all users need to be checked in how they use the lazy enumerator to see whether they can terminate early (and if so, how big the estimated impact would be, given sets could be huge).

I understand how … First of all, this code is deprecated. It's only used in the endpoints that we offer to manage OAuth tokens, which are deprecated. I spent some time trying to debug the issue but, to be honest, I was not able to fully understand why … Regarding your points about external/internal fragmentation: …

@davidor re: SSCAN happening - well, it can still happen, and the fact this is "deprecated" does not mean …

I know. I added a note in the TODO list above to document this in the future. We might find other limitations in future PRs, so I'll document all of them later.
     end

     def tokens_n_keys(token_set, service_id)
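For what it's worth, the SSCAN-based lazy interface mentioned in the TODO above could look roughly like the sketch below. It assumes the redis-rb client, where 'sscan(key, cursor, count: n)' returns the next cursor plus a batch of members; the method name 'lazy_tokens_from' and the batch size are invented for illustration and are not part of this PR.

require 'redis'

# Hypothetical sketch: enumerate a Redis set lazily with SSCAN instead of
# loading it whole with SMEMBERS, keeping the early-termination property.
def lazy_tokens_from(storage, token_set, batch_size: 100)
  Enumerator.new do |yielder|
    cursor = '0'
    loop do
      # redis-rb: sscan returns [next_cursor, members_batch]
      cursor, members = storage.sscan(token_set, cursor, count: batch_size)
      members.each { |member| yielder << member }
      break if cursor == '0'
    end
  end.lazy
end

# A caller taking only a few tokens scans a few batches instead of
# materialising a potentially huge set:
# lazy_tokens_from(Redis.new, 'some_token_set').first(10)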
@@ -198,15 +195,14 @@ def tokens_n_keys(token_set, service_id)
           Key.for token, service_id
         end
       end
-      # Note: this is returning two lazy enumerators

       [token_groups, key_groups]
     end

-    # Provides grouped data (as sourced from the lazy iterators) which
-    # matches respectively in each array position, ie. 1st group of data
-    # contains a group of tokens, keys and values with ttls, and
-    # position N of the tokens group has key in position N of the keys
-    # group, and so on.
+    # Provides grouped data which matches respectively in each array
+    # position, ie. 1st group of data contains a group of tokens, keys
+    # and values with ttls, and position N of the tokens group has key
+    # in position N of the keys group, and so on.
     #
     # [[[token group], [key group], [value_with_ttls_group]], ...]
     #
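To make the documented shape concrete, here is a small illustration with invented sample data (not from the PR) of how token, key and value-with-ttl groups line up position by position:

# Invented sample data; the group at index i lines up across all three arrays.
token_groups     = [%w[tok1 tok2], %w[tok3]]
key_groups       = [%w[key1 key2], %w[key3]]
value_ttl_groups = [[['val1', 60], ['val2', 30]], [['val3', 10]]]

grouped = token_groups.zip(key_groups, value_ttl_groups)
# => [[["tok1", "tok2"], ["key1", "key2"], [["val1", 60], ["val2", 30]]],
#     [["tok3"], ["key3"], [["val3", 10]]]]
# i.e. [[[token group], [key group], [value_with_ttls_group]], ...]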
No explanation of the actual problem in the commit message. 👎
This is important, can you please explain it? I will comment below what's wrong with removing this (although I don't mean it cannot or should not be removed, depends on our usage).
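To make the early-termination concern from the thread above concrete, here is a self-contained comparison (not from the PR; names and sizes are invented, and 'fetch_token' merely stands in for per-token work such as Redis reads):

# Illustrative only: how eager vs lazy chains behave when the caller stops early.
def fetch_token(raw)
  "decoded-#{raw}"  # pretend this is a per-token round trip to Redis
end

raw_tokens = Array.new(10_000) { |i| "tok#{i}" }

# Eager: all 10,000 elements go through fetch_token before 'first(10)' picks ten.
eager = raw_tokens.map { |t| fetch_token(t) }.first(10)

# Lazy: only 10 elements are ever transformed; the rest of the set is never touched.
lazy = raw_tokens.lazy.map { |t| fetch_token(t) }.first(10)

eager == lazy  # => true, but the lazy version did ~1000x less transformation work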