Fix docstring for start/finish_token formats
The tokens only need to be hex encoded when using
ByteOrderedPartitioner.
thobbs committed Sep 17, 2013
1 parent 30bee0c commit fd21960
Showing 1 changed file with 6 additions and 3 deletions.
pycassa/columnfamily.py: 6 additions & 3 deletions
@@ -19,7 +19,7 @@
 try:
     from collections import OrderedDict
 except ImportError:
-    from pycassa.util import OrderedDict
+    from pycassa.util import OrderedDict  # NOQA
 
 __all__ = ['gm_timestamp', 'ColumnFamily', 'PooledColumnFamily']
 
@@ -894,8 +894,11 @@ def get_range(self, start="", finish="", columns=None, column_start="",
         case, you are specifying a token range to fetch instead of a key
         range. This can be useful for fetching all data owned
         by a node or for parallelizing a full data set scan. Otherwise,
-        you should typically just use `start` and `finish`. Both `start_token`
-        and `finish_token` must be specified as hex-encoded strings.
+        you should typically just use `start` and `finish`. When using
+        RandomPartitioner or Murmur3Partitioner, `start_token`
+        and `finish_token` should be string versions of the numeric tokens;
+        for ByteOrderedPartitioner, they should be hex-encoded string versions
+        of the token.
 
         The `row_count` parameter limits the total number of rows that may be
         returned. If left as ``None``, the number of rows that may be returned
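For context, here is a minimal usage sketch (not part of this commit) of the token-range parameters the updated docstring describes, assuming RandomPartitioner or Murmur3Partitioner so the tokens are passed as stringified integers. The keyspace, column family, server address, and token values below are hypothetical.

from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

# Hypothetical keyspace, column family, and server address.
pool = ConnectionPool('MyKeyspace', server_list=['localhost:9160'])
cf = ColumnFamily(pool, 'MyColumnFamily')

# With RandomPartitioner or Murmur3Partitioner, pass string versions of the
# numeric tokens; with ByteOrderedPartitioner they would instead be
# hex-encoded strings, per the docstring change above.
start_token = '0'                                         # hypothetical token values
finish_token = '85070591730234615865843651857942052864'

# Iterate over only the rows whose keys fall in this token range, e.g. to
# parallelize a full data set scan across several workers.
for key, columns in cf.get_range(start_token=start_token, finish_token=finish_token):
    print('%s: %s' % (key, columns))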
