This repository has been archived by the owner on Jan 9, 2024. It is now read-only.

docs: fix a few simple typos
There are small typos in:
- CONTRIBUTING.md
- docs/development.rst
- docs/index.rst
- docs/logging.rst
- docs/pubsub.rst
- docs/upgrading.rst
- rediscluster/client.py
- rediscluster/connection.py
- tests/test_cluster_node_manager.py
- tests/test_commands.py
- tests/test_encoding_cluster.py

Fixes:
- Should read `specified` rather than `specefied`.
- Should read `implementation` rather than `impelmentation`.
- Should read `upstream` rather than `uptream`.
- Should read `iterate` rather than `itterate`.
- Should read `recommended` rather than `reccommended`.
- Should read `received` rather than `recieved`.
- Should read `preferred` rather than `preffered`.
- Should read `possibility` rather than `posiblity`.
- Should read `interval` rather than `intervall`.
- Should read `examples` rather than `exmaples`.
- Should read `documentation` rather than `documentaion`.

Closes #461
timgates42 authored and Grokzen committed Jun 25, 2021
1 parent d94f2fe commit f0627c9
Showing 11 changed files with 27 additions and 27 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -88,7 +88,7 @@ All tests should be assumed to work against the test environment that is impleme

## Testing strategy and how to implement cluster specific tests

-A new way of having the old upstream tests from redis-py combined with the cluster specific and unique tests that is needed to validate cluster functionality. This has been designed to improve the speed of which tests is updated from uptream as new redis-py releases is made and to make it easier to port them into the cluster variant.
+A new way of having the old upstream tests from redis-py combined with the cluster specific and unique tests that is needed to validate cluster functionality. This has been designed to improve the speed of which tests is updated from upstream as new redis-py releases is made and to make it easier to port them into the cluster variant.

How do you implement a test for this code?

2 changes: 1 addition & 1 deletion docs/development.rst
@@ -19,4 +19,4 @@ To start the local development server run from the root folder of this git repo
sphinx-autobuild docs docs/_build/html
-Open up `localhost:8000` in your web-browser to view the online documentaion
+Open up `localhost:8000` in your web-browser to view the online documentation
2 changes: 1 addition & 1 deletion docs/index.rst
@@ -31,7 +31,7 @@ or from source code
Basic usage example
-------------------

-Small sample script that shows how to get started with RedisCluster. It can also be found in the file `exmaples/basic.py`.
+Small sample script that shows how to get started with RedisCluster. It can also be found in the file `examples/basic.py`.

Additional code examples of more advance functionality can be found in the `examples/` folder in the source code git repo.

2 changes: 1 addition & 1 deletion docs/logging.rst
@@ -14,4 +14,4 @@ To setup logging for debugging inside the client during development you can add
logger.setLevel(logging.DEBUG)
logger.propagate = True
-Note that this logging is not reccommended to be used inside production as it can cause a performance drain and a slowdown of your client.
+Note that this logging is not recommended to be used inside production as it can cause a performance drain and a slowdown of your client.
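For context, the hunk above belongs to a snippet that configures Python's standard `logging` module. A self-contained version of that setup might look like this (the `rediscluster` logger name is an assumption based on the package name, and a handler is added so the records actually go somewhere):

```python
import logging

# 'rediscluster' is assumed from the package name; the diff hunk above
# only shows the setLevel/propagate lines of the documented snippet.
logger = logging.getLogger('rediscluster')
logger.setLevel(logging.DEBUG)
logger.propagate = True

# Without a handler the DEBUG records are never emitted anywhere visible.
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s'))
logger.addHandler(handler)
```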
4 changes: 2 additions & 2 deletions docs/pubsub.rst
@@ -7,7 +7,7 @@ According to the current official redis documentation on `PUBLISH`::

Integer reply: the number of clients that received the message.

-It was initially assumed that if we had clients connected to different nodes in the cluster it would still report back the correct number of clients that recieved the message.
+It was initially assumed that if we had clients connected to different nodes in the cluster it would still report back the correct number of clients that received the message.

However after some testing of this command it was discovered that it would only report the number of clients that have subscribed on the same server the `PUBLISH` command was executed on.

@@ -60,7 +60,7 @@ This new solution is probably future safe and it will probably be a similar solu
Known limitations with pubsub
-----------------------------

-Pattern subscribe and publish do not work properly because if we hash a pattern like `fo*` we will get a keyslot for that string but there is a endless posiblity of channel names based on that pattern that we can't know in advance. This feature is not limited but the commands is not recommended to use right now.
+Pattern subscribe and publish do not work properly because if we hash a pattern like `fo*` we will get a keyslot for that string but there is a endless possibility of channel names based on that pattern that we can't know in advance. This feature is not limited but the commands is not recommended to use right now.
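The limitation described in that hunk can be sketched numerically. Redis cluster hashes keys with CRC16 (XModem) modulo 16384 slots; Python's `binascii.crc_hqx` implements the same polynomial, so (assuming that equivalence, and ignoring hash tags) a toy keyslot function shows why one pattern cannot cover the slots of its matching channels:

```python
import binascii

def keyslot(channel: bytes) -> int:
    # CRC16 (XModem, polynomial 0x1021, init 0) reduced modulo 16384 slots.
    # binascii.crc_hqx uses the same polynomial as redis cluster's CRC16.
    return binascii.crc_hqx(channel, 0) % 16384

# The pattern string itself hashes to exactly one slot...
pattern_slot = keyslot(b"fo*")

# ...but channels that would match `fo*` are spread across many slots, so no
# single node is guaranteed to see every channel the pattern could match.
matching = [b"foo", b"fob", b"forest", b"fo:42", b"fortune"]
slots = {keyslot(c) for c in matching}
print(pattern_slot, sorted(slots))
```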

The implemented solution will only work if other clients use/adopt the same behaviour. If some other client behaves differently, there might be problems with `PUBLISH` and `SUBSCRIBE` commands behaving wrong.

2 changes: 1 addition & 1 deletion docs/upgrading.rst
@@ -108,7 +108,7 @@ Added new `ClusterCrossSlotError` exception class.
Added optional `max_connections_per_node` parameter to `ClusterConnectionPool` which changes behavior of `max_connections` so that it applies per-node rather than across the whole cluster. The new feature is opt-in, and the existing default behavior is unchanged. Users are recommended to opt-in as the feature fixes two important problems. First is that some nodes could be starved for connections after max_connections is used up by connecting to other nodes. Second is that the asymmetric number of connections across nodes makes it challenging to configure file descriptor and redis max client settings.
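The two accounting modes that paragraph contrasts can be illustrated with a toy model (this sketch is not the real `ClusterConnectionPool`; names and behaviour are simplified for illustration):

```python
from collections import defaultdict

class PoolSketch:
    """Toy model of connection accounting, not the library's pool code."""

    def __init__(self, max_connections, max_connections_per_node=False):
        self.max_connections = max_connections
        self.per_node = max_connections_per_node
        self.counts = defaultdict(int)  # node name -> open connections

    def acquire(self, node):
        # Per-node mode: the cap applies to each node independently,
        # so one busy node cannot starve the others of connections.
        used = self.counts[node] if self.per_node else sum(self.counts.values())
        if used >= self.max_connections:
            raise RuntimeError("Too many connections")
        self.counts[node] += 1

# Cluster-wide cap: node A can consume the whole budget...
shared = PoolSketch(max_connections=2)
shared.acquire("A")
shared.acquire("A")
# ...and shared.acquire("B") would now raise, though B has no connections.

# Per-node cap: each node gets its own budget of 2.
per_node = PoolSketch(max_connections=2, max_connections_per_node=True)
per_node.acquire("A")
per_node.acquire("A")
per_node.acquire("B")
```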

Reinitialize on `MOVED` errors will not run on every error but instead on every
-25 error to avoid excessive cluster reinitialize when used in multiple threads and resharding at the same time. If you want to go back to the old behaviour with reinitialize on every error you should pass in `reinitialize_steps=1` to the client constructor. If you want to increase or decrease the intervall of this new behaviour you should set `reinitialize_steps` in the client constructor to a value that you want.
+25 error to avoid excessive cluster reinitialize when used in multiple threads and resharding at the same time. If you want to go back to the old behaviour with reinitialize on every error you should pass in `reinitialize_steps=1` to the client constructor. If you want to increase or decrease the interval of this new behaviour you should set `reinitialize_steps` in the client constructor to a value that you want.
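The counting behaviour that paragraph describes can be sketched as follows (an illustrative model only; the real client wires this counter into its `MOVED` error handling):

```python
class MovedCounterSketch:
    """Toy model: refresh the slot cache only every N MOVED errors."""

    def __init__(self, reinitialize_steps=25):
        self.reinitialize_steps = reinitialize_steps
        self.moved_errors = 0
        self.reinitialize_calls = 0

    def on_moved_error(self):
        self.moved_errors += 1
        # Only refresh the (expensive) cluster topology every N errors,
        # instead of on every single MOVED response.
        if self.moved_errors % self.reinitialize_steps == 0:
            self.reinitialize_calls += 1

# Default step of 25: 100 errors trigger refreshes at 25, 50, 75 and 100.
c = MovedCounterSketch(reinitialize_steps=25)
for _ in range(100):
    c.on_moved_error()

# reinitialize_steps=1 restores the old behaviour: refresh on every error.
eager = MovedCounterSketch(reinitialize_steps=1)
for _ in range(10):
    eager.on_moved_error()
```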

Pipelines in general have received a lot of attention so if you are using pipelines in your code, ensure that you test the new code out a lot before using it to make sure it still works as you expect.

20 changes: 10 additions & 10 deletions rediscluster/client.py
@@ -774,23 +774,23 @@ def cluster_addslots(self, node_id, *slots):
"""
Assign new hash slots to receiving node
-Sends to specefied node
+Sends to specified node
"""
return self.execute_command('CLUSTER ADDSLOTS', *slots, node_id=node_id)

def cluster_countkeysinslot(self, slot_id):
"""
Return the number of local keys in the specified hash slot
-Send to node based on specefied slot_id
+Send to node based on specified slot_id
"""
return self.execute_command('CLUSTER COUNTKEYSINSLOT', slot_id)

def cluster_count_failure_report(self, node_id):
"""
Return the number of failure reports active for a given node
-Sends to specefied node
+Sends to specified node
"""
return self.execute_command('CLUSTER COUNT-FAILURE-REPORTS', node_id=node_id)

@@ -812,7 +812,7 @@ def cluster_failover(self, node_id, option=None):
"""
Forces a slave to perform a manual failover of its master
-Sends to specefied node
+Sends to specified node
"""
if option:
if option.upper() not in ['FORCE', 'TAKEOVER']:
@@ -842,7 +842,7 @@ def cluster_meet(self, node_id, host, port):
"""
Force a node cluster to handshake with another node.
-Sends to specefied node
+Sends to specified node
"""
return self.execute_command('CLUSTER MEET', host, port, node_id=node_id)

@@ -858,7 +858,7 @@ def cluster_replicate(self, target_node_id):
"""
Reconfigure a node as a slave of the specified master node
-Sends to specefied node
+Sends to specified node
"""
return self.execute_command('CLUSTER REPLICATE', target_node_id)

@@ -869,7 +869,7 @@ def cluster_reset(self, node_id, soft=True):
If 'soft' is True then it will send 'SOFT' argument
If 'soft' is False then it will send 'HARD' argument
-Sends to specefied node
+Sends to specified node
"""
return self.execute_command('CLUSTER RESET', b'SOFT' if soft else b'HARD', node_id=node_id)

@@ -901,15 +901,15 @@ def cluster_save_config(self):

def cluster_get_keys_in_slot(self, slot, num_keys):
"""
-Returns the number of keys in the specefied cluster slot
+Returns the number of keys in the specified cluster slot
"""
return self.execute_command('CLUSTER GETKEYSINSLOT', slot, num_keys)

def cluster_set_config_epoch(self, node_id, epoch):
"""
Set the configuration epoch in a new node
-Sends to specefied node
+Sends to specified node
"""
return self.execute_command('CLUSTER SET-CONFIG-EPOCH', epoch, node_id=node_id)

@@ -918,7 +918,7 @@ def cluster_setslot(self, node_id, slot_id, state, bind_to_node_id=None):
"""
Bind an hash slot to a specific node
-Sends to specefied node
+Sends to specified node
"""
if state.upper() in ('IMPORTING', 'MIGRATING', 'NODE') and node_id is not None:
return self.execute_command('CLUSTER SETSLOT', slot_id, state, node_id)
4 changes: 2 additions & 2 deletions rediscluster/connection.py
@@ -596,7 +596,7 @@ def get_connection_by_key(self, key, command):

def get_master_connection_by_slot(self, slot):
"""
-Returns a connection for the Master node for the specefied slot.
+Returns a connection for the Master node for the specified slot.
Do not return a random node if master node is not available for any reason.
"""
@@ -606,7 +606,7 @@ def get_random_master_slave_connection_by_slot(self, slot):
def get_random_master_slave_connection_by_slot(self, slot):
"""
Returns a random connection from the set of (master + slaves) for the
-specefied slot. If connection is not reachable then return a random connection.
+specified slot. If connection is not reachable then return a random connection.
"""
self._checkpid()

6 changes: 3 additions & 3 deletions tests/test_cluster_node_manager.py
@@ -158,7 +158,7 @@ def patch_execute_command(*args, **kwargs):

def test_empty_startup_nodes():
"""
-It should not be possible to create a node manager with no nodes specefied
+It should not be possible to create a node manager with no nodes specified
"""
with pytest.raises(RedisClusterException):
NodeManager()
@@ -252,7 +252,7 @@ def execute_command(*args, **kwargs):

def test_all_nodes():
"""
-Set a list of nodes and it should be possible to itterate over all
+Set a list of nodes and it should be possible to iterate over all
"""
n = NodeManager(startup_nodes=[{"host": "127.0.0.1", "port": 7000}])
n.initialize()
@@ -266,7 +266,7 @@ def test_all_nodes():
def test_all_nodes_masters():
"""
Set a list of nodes with random masters/slaves config and it shold be possible
-to itterate over all of them.
+to iterate over all of them.
"""
n = NodeManager(
startup_nodes=[
8 changes: 4 additions & 4 deletions tests/test_commands.py
@@ -39,7 +39,7 @@ def cleanup():

def redis_server_time(client):
"""
-Method adapted from uptream to return the server timestamp from the main
+Method adapted from upstream to return the server timestamp from the main
cluster node that we assigned as port 7000 node.
This is not ideal but will be done for now.
"""
@@ -1987,15 +1987,15 @@ def test_cluster_slaves(self, mock_cluster_resp_slaves):
@skip_for_no_cluster_impl()
def test_readwrite(self, r):
"""
-FIXME: Needs cluster impelmentation
+FIXME: Needs cluster implementation
"""
assert r.readwrite()

@skip_if_server_version_lt('3.0.0')
@skip_for_no_cluster_impl()
def test_readonly_invalid_cluster_state(self, r):
"""
-FIXME: Needs cluster impelmentation
+FIXME: Needs cluster implementation
"""
with pytest.raises(exceptions.RedisError):
r.readonly()
@@ -2004,7 +2004,7 @@ def test_readonly_invalid_cluster_state(self, r):
@skip_for_no_cluster_impl()
def test_readonly(self, mock_cluster_resp_ok):
"""
-FIXME: Needs cluster impelmentation
+FIXME: Needs cluster implementation
"""
assert mock_cluster_resp_ok.readonly() is True

2 changes: 1 addition & 1 deletion tests/test_encoding_cluster.py
@@ -12,7 +12,7 @@ class TestEncodingCluster(object):
We must import the entire class due to the seperate fixture that uses RedisCluster as client
class instead of the normal Redis instance.
-FIXME: If possible, monkeypatching TestEncoding class would be preffered but kinda impossible in reality
+FIXME: If possible, monkeypatching TestEncoding class would be preferred but kinda impossible in reality
"""
@pytest.fixture()
def r(self, request):
