
Bugfix/redis4 cluster initialization #293

Conversation

afshinrudgar

Bug in redis cluster initialization with redis 4

Grokzen and others added 23 commits December 26, 2014 20:39
PUBSUB commands were added to redis-py in 3f4ac6; according to the Redis docs (https://redis.io/commands/pubsub, https://redis.io/topics/pubsub), they can be sent to any node.
Added parsing callbacks for all three `PUBSUB` sub-commands, which give these commands the same behaviour as in single-node Redis & `redis-py`.

**Note** that the per-method switches for *not* aggregating responses into one do not appear to bubble through to the callback.
…BSUB commands

``**kwargs`` was not bubbled through to all result callbacks, so the aggregation toggle could not reach the PUBSUB parsing methods in `utils.py`. This adds `**kwargs` to the `StrictRedisCluster._merge_result` method's signature (see the sketch after this commit list).
* Use `string_keys_to_dict` for all `RESULT_CALLBACKS`
* Simplify `subscribe` calls in test cases to remove unnecessary variable declarations
* Remove the change to `_random_good_hashslot_key`, which was not related to this PR
* Add `set_result_callback(command, callback)` to match the redis-py feature `set_response_callback`
* Skip `PUBSUB` tests on redis-py versions below `2.10.6`
* Add a docstring to the `TestPubSubPubSubSubcommands` class
* Add docs on PUBSUB sub-commands and update the release notes
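For context, here is a minimal, self-contained sketch of the pattern these commits describe: per-command result callbacks that merge per-node PUBSUB replies, with `**kwargs` bubbled through `_merge_result` so callers can opt out of aggregation. This is not the PR's actual diff; the callback names, the `ClusterClientSketch` class, and the reply shapes below are illustrative assumptions.

```python
# Illustrative sketch only -- names and reply shapes are assumptions,
# not the PR's actual code.

def parse_pubsub_channels(node_responses, aggregate=True):
    """Merge PUBSUB CHANNELS replies (node name -> channel list) into one list."""
    if not aggregate:
        return node_responses  # keep the per-node mapping unchanged
    merged = set()
    for channels in node_responses.values():
        merged.update(channels)
    return sorted(merged)

def parse_pubsub_numpat(node_responses, aggregate=True):
    """Sum PUBSUB NUMPAT replies across nodes."""
    if not aggregate:
        return node_responses
    return sum(node_responses.values())

class ClusterClientSketch:
    def __init__(self):
        # Per-command merge callbacks, analogous to RESULT_CALLBACKS.
        self.result_callbacks = {
            'PUBSUB CHANNELS': parse_pubsub_channels,
            'PUBSUB NUMPAT': parse_pubsub_numpat,
        }

    def set_result_callback(self, command, callback):
        """Cluster-side analogue of redis-py's set_response_callback."""
        self.result_callbacks[command] = callback

    def _merge_result(self, command, res, **kwargs):
        # **kwargs (e.g. aggregate=False) now reaches the parsing callback.
        return self.result_callbacks[command](res, **kwargs)

client = ClusterClientSketch()
replies = {'node-a': [b'news', b'sports'], 'node-b': [b'news']}
print(client._merge_result('PUBSUB CHANNELS', replies))                  # [b'news', b'sports']
print(client._merge_result('PUBSUB CHANNELS', replies, aggregate=False)) # per-node dict
```

This mirrors single-node redis-py behaviour by default (one merged reply) while the bubbled-through toggle preserves access to the raw per-node responses.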
@Grokzen (Owner) commented Nov 12, 2018

@afshinrudgar Why open a second pull request when you already had one open? Also, there is a problem with where you created your branch from: you should not be seeing all of these already-merged unstable-branch commits in your pull request.

@Grokzen (Owner) commented Nov 12, 2018

@afshinrudgar Again, please read the last two comments in your previous pull request and answer them before you continue with this one. I am currently not inclined to merge this PR, as it solves the problem in the wrong way and in the wrong location. Please respond in either PR with a reason why this should be fixed here rather than by a proper deployment of the redis-server. Without a good answer to that question, I probably won't merge this.

@coveralls

Coverage Status

Coverage increased (+0.06%) to 85.679% when pulling f1a60fc on afshinrudgar:bugfix/redis4-cluster-initialization into 813b934 on Grokzen:unstable.


Grokzen closed this Nov 13, 2018
@Grokzen (Owner) commented Nov 13, 2018

@afshinrudgar Next time you find an issue, please open an issue describing it before submitting a PR. If it turns out not to be a real issue, or not relevant or related to this project, then no one needs to spend time solving it, saving everyone time and effort :)
