Clusters should optionally require full slot coverage #1845
Conversation
Codecov Report
```
@@            Coverage Diff             @@
##           master    #1845      +/-   ##
==========================================
- Coverage   93.64%   93.60%   -0.05%
==========================================
  Files          76       76
  Lines       16207    16199       -8
==========================================
- Hits        15177    15163      -14
- Misses       1030     1036       +6
```
Continue to review full report at Codecov.
```diff
@@ -1450,29 +1419,9 @@ def initialize(self):
                 # isn't a full coverage
                 raise RedisClusterException(
                     f"All slots are not covered after query all startup_nodes. "
-                    f"{len(self.slots_cache)} of {REDIS_CLUSTER_HASH_SLOTS} "
+                    f"{len(tmp_slots)} of {REDIS_CLUSTER_HASH_SLOTS} "
```
small bug fix
Makes sense - limit based on available as opposed to the entire cluster. This should recover more cleanly.
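For context, a minimal sketch of the check this error message belongs to, using only the names visible in the hunk (`tmp_slots`, `REDIS_CLUSTER_HASH_SLOTS`); the actual `initialize()` code may differ:

```python
REDIS_CLUSTER_HASH_SLOTS = 16384  # total number of hash slots in a Redis Cluster

def covers_all_slots(tmp_slots):
    # tmp_slots maps slot -> node(s) discovered during this initialize() pass,
    # so len(tmp_slots) reports what this query actually found, instead of a
    # possibly stale self.slots_cache.
    return len(tmp_slots) == REDIS_CLUSTER_HASH_SLOTS
```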
…on, Changed RedisClient's 'require_full_coverage' default value to false
Pull Request check-list
Please make sure to review and check all of these items:
- Does `$ tox` pass with this change (including linting)?

NOTE: these things are not required to open a PR and can be done afterwards / while the PR is open.
Description of change
To avoid the client being a limiting factor when communicating with the server, we should not require full slot coverage by default on the client side.
require_full_coverage parameter:

When set to False (the default): the client does not require full coverage of the slots. However, if not all slots are covered and at least one node has 'cluster-require-full-coverage' set to 'yes', the server will throw a ClusterDownError for some key-based commands. See https://redis.io/topics/cluster-tutorial#redis-cluster-configuration-parameters

When set to True: all slots must be covered to construct the cluster client. If not all slots are covered, a RedisClusterException will be thrown.
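A minimal usage sketch of the flag (the host and port are illustrative, assuming a cluster node reachable at localhost:7000):

```python
from redis.cluster import RedisCluster

# Default (require_full_coverage=False): the client is constructed even if
# some of the 16384 hash slots are not covered by any node.
rc = RedisCluster(host="localhost", port=7000)

# Strict mode: construction raises RedisClusterException unless every
# slot is covered.
strict_rc = RedisCluster(host="localhost", port=7000, require_full_coverage=True)
```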
The 'skip_full_coverage_check' parameter has been removed from RedisClient, and the client no longer checks the 'cluster-require-full-coverage' configuration on the cluster nodes (note: this is not backward compatible). This check is unnecessary overhead for the client, since the case is already handled by the server's ClusterDown error. Additionally, some managed Redis clusters, such as ElastiCache, don't support the 'config' command, so the client would not work with the default configuration if the node configuration check were kept. Other clients (e.g. Lettuce, Jedis) don't run this check either.
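Since the coverage problem now surfaces server-side at command time, callers that care about it can handle the error there instead; a sketch, again with an illustrative address:

```python
from redis.cluster import RedisCluster
from redis.exceptions import ClusterDownError

rc = RedisCluster(host="localhost", port=7000)  # require_full_coverage=False

try:
    rc.set("foo", "bar")
except ClusterDownError as err:
    # Raised by the server when the key's slot is not served and at least
    # one node has cluster-require-full-coverage set to 'yes'.
    print(f"Cluster reports down for this key's slot: {err}")
```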