fix typos #12

Merged
merged 1 commit into from Sep 15, 2015
14 changes: 7 additions & 7 deletions articles/advanced_options.md
@@ -42,17 +42,17 @@ to instantiate the policy:
(cp/exponential-reconnection-policy 100 1000)
```

-### Constant reconnectoin policy
+### Constant reconnection policy

-Constant reconnectoin policy waits for the fixed period of time
+Constant reconnection policy waits for the fixed period of time
between reconnection attempts.

`clojurewerkz.cassaforte.policies/constant-reconnection-policy` is used
to instantiate the policy:


```clojure
-(client/constant-reconnection-policy 100)
+(cp/constant-reconnection-policy 100)
```

The policy above will wait for 100 milliseconds between reconnection
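
For quick reference, both reconnection policies covered by this hunk can be instantiated from the `clojurewerkz.cassaforte.policies` namespace using the calls already shown above; in the sketch below the arguments are assumed to be delays in milliseconds (base and cap for the exponential policy, a fixed delay for the constant one).

```clojure
(require '[clojurewerkz.cassaforte.policies :as cp])

;; exponential backoff between reconnection attempts,
;; assumed to grow from a 100 ms base up to a 1000 ms cap
(def exponential-policy (cp/exponential-reconnection-policy 100 1000))

;; constant policy: a fixed 100 ms pause between attempts
(def constant-policy (cp/constant-reconnection-policy 100))
```
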
@@ -114,7 +114,7 @@ level for query to succeed. It will retry queries in two cases:
* __on timed out read__, if enough replicas responded, but data still was not retrieved, which
usually means that some of the nodes chosen by coordinator are dead but were not detected
as such just yet.
-* __on timed out write__, only if it occured during writing to distributed batch log. It is very likely that
+* __on timed out write__, only if it occurred during writing to distributed batch log. It is very likely that
coordinator picked unresponsive nodes that were not yet detected as dead.

Use `clojurewerkz.cassaforte.policies/retry-policy` to pick a policy by name:
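
The concrete example is collapsed in this diff, but a minimal sketch of picking a policy by name might look like the following; the `:default` keyword is an assumption and is not confirmed here.

```clojure
(require '[clojurewerkz.cassaforte.policies :as cp])

;; pick a retry policy by name; :default is an assumed keyword
(def default-retry-policy (cp/retry-policy :default))
```
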
@@ -129,7 +129,7 @@ Use `clojurewerkz.cassaforte.policies/retry-policy` to pick a policy by name:

Under some circumstances, it makes sense to tune the consistency level
for the subsequent write. This sacrifices consistency for
-availability. Operation will still be considered as sucessful, even
+availability. Operation will still be considered as successful, even
though smaller amount of replicas were used for the operation.

For cases like that, you may use the downgrading policy. It will retry query:
@@ -142,9 +142,9 @@ For cases like that, you may use the downgrading policy. It will retry query:
* if coordinator node notices that there's __not enough replicas__ alive to satisfy query, execute
same query with lower consistency level.

-This policy should only be used when tradeoff of writing data to the
+This policy should only be used when trade-off of writing data to the
smaller amount of nodes is acceptable. Also, that sometimes data won't
-be even possible to read that way, because tradeoff was made and
+be even possible to read that way, because trade-off was made and
guarantees have changed. Reads with lower consistency level may
increase chance of reading stale data.

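When that trade-off is acceptable, the downgrading policy would presumably be selected through the same `retry-policy` function; the `:downgrading-consistency` keyword below is an assumption based on the policy's name in this guide.

```clojure
(require '[clojurewerkz.cassaforte.policies :as cp])

;; retry at a lower consistency level when too few replicas respond;
;; the keyword name is an assumption
(def downgrading-policy (cp/retry-policy :downgrading-consistency))
```
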
4 changes: 2 additions & 2 deletions articles/cassandra_concepts.md
@@ -90,7 +90,7 @@ evenly distributed between them.

This can be split into two parts: _Service Discovery_ and _Failure
Detection_. Service discovery comes into play when you set up a
-fresh node, add it to the cluster. data gets replicated to that
+fresh node, add it to the cluster. Data gets replicated to that
node and it starts receiving requests. When the node was taken
down for maintenance, or fails due to an error, this should be
detected as quickly as possible by other members of the cluster.
@@ -110,7 +110,7 @@ If you're familiar with Cassandra, you may want to skip this section.
__Keyspace__ is what's usually called database in relational
databases, it holds column families, sets of key-value pairs. __Column
family__ is somewhat close to the table concept from relational
-DBs. There're no relations between column families in Cassandra, even
+DBs. There are no relations between column families in Cassandra, even
though it is possible to use foreign keys, _there will be no
referential integrity checks performed_.

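To make the analogy concrete, a minimal sketch of creating a keyspace and a column family with Cassaforte's CQL DSL might look like this; the helper names (`create-keyspace`, `create-table`, `with`, `column-definitions`) and option shapes are assumptions rather than something this diff confirms.

```clojure
(require '[clojurewerkz.cassaforte.client :as client]
         '[clojurewerkz.cassaforte.cql    :as cql]
         '[clojurewerkz.cassaforte.query  :refer [with column-definitions]])

(let [conn (client/connect ["127.0.0.1"])]
  ;; a keyspace plays the role of a database
  (cql/create-keyspace conn "demo"
                       (with {:replication {"class"              "SimpleStrategy"
                                            "replication_factor" 1}}))
  (cql/use-keyspace conn "demo")
  ;; a column family is roughly a table
  (cql/create-table conn :users
                    (column-definitions {:name        :varchar
                                         :city        :varchar
                                         :primary-key [:name]})))
```
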
6 changes: 3 additions & 3 deletions articles/cql.md
@@ -12,7 +12,7 @@ This guide covers most CQL operations, such as
* Ordering
* Collection Types
* Counter Columns
-* Timestampls and TTL
+* Timestamps and TTL
* Prepared Statements
* Range queries
* Pagination
@@ -289,7 +289,7 @@ Available consistency levels are:
Please refer to [Cassandra documentation on consistency levels](http://www.datastax.com/documentation/cassandra/2.1/cassandra/dml/dml_config_consistency_c.html)
for more info.

-Following operation will be performed with consistenct level of `:quorum`:
+The following operation will be performed with consistency level of `:quorum`:

``` clojure
(ns cassaforte.docs
@@ -808,7 +808,7 @@ column provides an efficient way to count or sum integer values. It is
achieved by using atomic increment/decrement operations on column values.

Counter is a special column type, whose value is a 64-bit (signed)
-interger. On write, new value is added (or substracted) to previous
+integer. On write, new value is added (or subtracted) to previous
counter value. It should be noted that usual consistency/availability
tradeoffs apply to counter operations. In order to perform a counter
update, Cassandra has to perform a read before write in a background,
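
To illustrate the increment/decrement model described above, a counter column is typically bumped with an update rather than an insert; the `increment-by` helper, the `where` clause shape, and the `page_views` table in this sketch are assumptions, not part of this diff.

```clojure
(require '[clojurewerkz.cassaforte.client :as client]
         '[clojurewerkz.cassaforte.cql    :as cql]
         '[clojurewerkz.cassaforte.query  :refer [where increment-by]])

;; hypothetical page_views table with a counter column named views
(let [conn (client/connect ["127.0.0.1"])]
  (cql/update conn :page_views
              {:views (increment-by 1)}
              (where {:page_id "home"})))
```
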
2 changes: 1 addition & 1 deletion articles/guides.md
@@ -25,7 +25,7 @@ Covers CQL operations besides schema manipulation:
* Ordering
* Collection Types
* Counter Columns
-* Timestampls and TTL
+* Timestamps and TTL
* Prepared Statements
* Range queries
* Pagination