Avoid blocking the server in RedisCacheStore#delete_matched #32614

Closed
glebm wants to merge 3 commits

Conversation

@glebm (Contributor) commented Apr 17, 2018

Fixes #32610

Lua scripts in Redis are blocking, meaning that no other client can execute any commands while the script is running. See https://redis.io/commands/eval#atomicity-of-scripts.

This results in the following exceptions once the number of keys is sufficiently large:

    BUSY Redis is busy running a script. You can only call SCRIPT KILL or SHUTDOWN NOSAVE.

This commit replaces the Lua-based implementation with one that uses `SCAN` and `DEL` in batches. This doesn't block the server.

The primary limitation of `SCAN`, i.e. potential duplicate keys, is of no consequence here, because `DEL` ignores keys that do not exist.
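For illustration, here is a minimal standalone sketch of the batched `SCAN`/`DEL` approach using the redis-rb client. The glob pattern is a made-up example; the batch size of 1000 matches the `SCAN_BATCH_SIZE` constant introduced in this PR:

    require "redis"

    redis = Redis.new
    pattern = "namespace:views/*" # hypothetical glob pattern
    cursor = "0"

    loop do
      # SCAN returns the next cursor and a batch of keys matching the glob.
      cursor, keys = redis.scan(cursor, match: pattern, count: 1000)
      # DEL ignores keys that no longer exist, so duplicate keys from SCAN are harmless.
      redis.del(*keys) unless keys.empty?
      # A cursor of "0" signals that the full iteration is complete.
      break if cursor == "0"
    end

Because each `SCAN` call touches only a bounded number of keys, other clients can interleave commands between batches, unlike with a single long-running Lua script.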

@rails-bot commented Apr 17, 2018

Thanks for the pull request, and welcome! The Rails team is excited to review your changes, and you should hear from @georgeclaghorn (or someone else) soon.

If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Due to the way GitHub handles out-of-date commits, this should also make it reasonably obvious what issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes.

This repository is being automatically checked for code quality issues using Code Climate. You can see results for this analysis in the PR status below. Newly introduced issues should be fixed before a Pull Request is considered ready to review.

Please see the contribution instructions for more information.

@rafaelfranca requested a review from jeremy Apr 17, 2018

    private_constant :DELETE_GLOB_LUA
    # The maximum number of entries to receive per SCAN call.
    SCAN_BATCH_SIZE = 1000
    private_constant :SCAN_BATCH_SIZE

@jeremy (Member) Apr 18, 2018

Is this specifically chosen? Should we measure it? Should we be able to tune it?

@glebm (Author) Apr 18, 2018

It is arbitrary (we want a value that doesn't result in too many requests, while keeping the individual responses small, e.g. under 100 KiB). I don't think it's necessary to make it tunable until somebody asks (at which point they can add an option).

    start, keys = c.scan(start, match: pattern, count: SCAN_BATCH_SIZE)
    c.del(*keys) unless keys.empty?
    break if start == "0"
    end

@jeremy (Member) Apr 18, 2018

Could move the break to a loop condition, e.g.

cursor = "0"

begin
  cursor, keys = c.scan(cursor, )
  c.del(*keys) if keys.any?
end until cursor == "0"

@glebm (Author) Apr 18, 2018

Done

start = "0"
pattern = namespace_key(matcher, options)
# Fetch keys in batches using SCAN to avoid blocking the Redis server.
while true

@jeremy (Member) Apr 18, 2018

Can we run into the https://redis.io/commands/scan#guarantee-of-termination scenario of never terminating, if matching keys are being added more quickly than we can consume them?

@glebm (Author) Apr 18, 2018

In theory, yes (no way around it that I know of).

In practice, I don't think that's possible:

  1. SCAN is fast: it can iterate 1 million records in less than a second on 2011 consumer laptop hardware.
  2. Generating cache keys is not fast (it requires doing something that needs caching first), and at least 1,000 cache writes per second would need to be happening, assuming 1 million keys and 2011-level laptop hardware (likely many more, because modern server hardware is faster).

This is not even taking into account that SCAN is not guaranteed to return all the elements added during iteration.

    raise ArgumentError, "Only Redis glob strings are supported: #{matcher.inspect}"
    end
    redis.with do |c|
    start = "0"

@jeremy (Member) Apr 18, 2018

Redis SCAN calls this the cursor. Nice to stick with upstream terminology.

@glebm (Author) Apr 18, 2018

Done
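
For context, here is a rough reconstruction of `delete_matched` with the review feedback above applied (the `begin`/`end until` loop and the `cursor` naming). It is pieced together from the diff fragments in this thread, so the instrumentation wrapper and the exact surrounding code are assumptions rather than the merged implementation:

    # Sketch only: reconstructed from the review fragments above; the merged
    # code in ActiveSupport::Cache::RedisCacheStore may differ in detail.
    def delete_matched(matcher, options = nil)
      instrument :delete_matched, matcher do
        unless String === matcher
          raise ArgumentError, "Only Redis glob strings are supported: #{matcher.inspect}"
        end

        redis.with do |c|
          pattern = namespace_key(matcher, options)
          cursor = "0"
          # Fetch keys in batches using SCAN to avoid blocking the Redis server.
          begin
            cursor, keys = c.scan(cursor, match: pattern, count: SCAN_BATCH_SIZE)
            c.del(*keys) unless keys.empty?
          end until cursor == "0"
        end
      end
    end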

@jeremy approved these changes Apr 18, 2018

@jeremy (Member) left a comment

Great work @glebm. Thank you for implementing this!

@jeremy closed this in ef2af62 Apr 18, 2018
@glebm deleted the redis-delete-matched branch Apr 18, 2018
@glebm (Author) commented Apr 19, 2018

@jeremy Thanks for reviewing! Can you also backport this to the previous Rails versions? I expect it will apply with no conflicts.

bogdanvlviv added a commit to bogdanvlviv/rails that referenced this issue Apr 19, 2018
Fixes rails#32610. Closes rails#32614.

cherry-pick ef2af62