
Merge pull request #201 from michael-grunder/master

Fixed a typo
2 parents f9a57f3 + dc42723 · commit 40a462727b73da731fda8e5679264993d7b29177 · @djanowski committed Feb 14, 2013
Showing with 3 additions and 3 deletions.
  1. +2 −2 topics/mass-insert.md
  2. +1 −1 topics/memory-optimization.md
topics/mass-insert.md
@@ -2,7 +2,7 @@ Redis Mass Insertion
===
Sometimes Redis instances needs to be loaded with big amount of preexisting
-or user generated data in a short amount of time, so that million of keys
+or user generated data in a short amount of time, so that millions of keys
will be created as fast as possible.
This is called a *mass insertion*, and the goal of this document is to
@@ -13,7 +13,7 @@ Use the protocol, Luke
Using a normal Redis client to perform mass insertion is not a good idea
for a few reasons: the naive approach of sending one command after the other
-is slow because there is to pay the round trip time for every command.
+is slow because you have to pay for the round trip time for every command.
It is possible to use pipelining, but for mass insertion of many records
you need to write new commands while you read replies at the same time to
make sure you are inserting as fast as possible.
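
As an aside for readers following the excerpt above (not part of this commit), a minimal sketch of the protocol-based approach it refers to: a script that writes raw Redis protocol (RESP) to stdout, which can then be fed to `redis-cli --pipe`. The key names and the count are made up for illustration.

    # Sketch: emit raw Redis protocol for mass insertion.
    # Each command is encoded as *<argc>\r\n followed by
    # $<byte-length>\r\n<arg>\r\n for every argument.
    import sys

    def gen_redis_proto(*args):
        proto = f"*{len(args)}\r\n"
        for arg in args:
            arg = str(arg)
            proto += f"${len(arg.encode())}\r\n{arg}\r\n"
        return proto

    if __name__ == "__main__":
        for n in range(1000):
            # Illustrative keys/values only.
            sys.stdout.write(gen_redis_proto("SET", f"key:{n}", f"value:{n}"))

Run it with something like `python gen_proto.py | redis-cli --pipe`; pipe mode reads the generated stream and reports the number of replies and errors when it finishes.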
topics/memory-optimization.md
@@ -8,7 +8,7 @@ Since Redis 2.2 many data types are optimized to use less space up to a certain
This is completely transparent from the point of view of the user and API.
Since this is a CPU / memory trade off it is possible to tune the maximum number of elements and maximum element size for special encoded types using the following redis.conf directives.
- hash-max-zipmap-entries 64 (hahs-max-ziplist-entries for Redis >= 2.6)
+ hash-max-zipmap-entries 64 (hash-max-ziplist-entries for Redis >= 2.6)
hash-max-zipmap-value 512 (hash-max-ziplist-value for Redis >= 2.6)
list-max-ziplist-entries 512
list-max-ziplist-value 64
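
As a side note to the directives in the hunk above (again not part of this commit), a small sketch, assuming the redis-py client, of how those thresholds govern the special encodings; the key name user:1000 and the threshold value are illustrative only.

    # Sketch, assuming the redis-py client is installed and a local server.
    import redis

    r = redis.Redis()

    # A small hash stays in the compact encoding
    # (zipmap on Redis 2.2-2.4, ziplist on >= 2.6).
    r.hset("user:1000", "name", "alice")
    print(r.execute_command("OBJECT", "ENCODING", "user:1000"))  # e.g. b'ziplist'

    # Once a hash exceeds the entry/value limits it is converted
    # to a regular hash table; the limits can be inspected and tuned:
    print(r.config_get("hash-max-ziplist-entries"))
    r.config_set("hash-max-ziplist-entries", 128)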
