
Commit

Minor typo fixes
fixes bug 1056303
fixes bug 1056848
fixes bug 1056846

As in the bug reports - just a few minor typos that this patch fixes.

Change-Id: I496c7b01c758c2e14069e5f7232ca8a27d3adeca
fifieldt committed Sep 29, 2012
1 parent 41931d8 commit 19708d9
Showing 2 changed files with 3 additions and 3 deletions.
@@ -274,7 +274,7 @@ format="PNG" />
 </para>
 </listitem>
 </itemizedlist>Quantum relies on the OpenStack
-Identify Project (Keystone) for authentication and
+Identity Project (Keystone) for authentication and
 authorization of all API request.    </para>
 <para>OpenStack Nova interacts with Quantum through calls
 to its standard API.  As part of creating a VM,
@@ -294,7 +294,7 @@ format="PNG" />
 <para>Like other OpenStack services, Quantum provides cloud administrators with
 significant flexibility in deciding where to run individual services.  One one
 extreme, all services including Nova, Quantum, Keystone, and so on can be run on a
-single physical hosts for evaluation purposes.  On the other, each service could
+single physical host for evaluation purposes.  On the other, each service could
 have its own physical hosts, and some cases be replicated across multiple hosts for
 redundancy. See <xref linkend="ch_high_avail"/>  </para>
 <para>In this guide, we focus primarily on a standard
@@ -1715,7 +1715,7 @@ net.ipv4.netfilter.ip_conntrack_max = 262144
 <para>Replication uses a push model, with records and files generally only being copied from local to remote replicas. This is important because data on the node may not belong there (as in the case of handoffs and ring changes), and a replicator can't know what data exists elsewhere in the cluster that it should pull in. It's the duty of any node that contains data to ensure that data gets to where it belongs. Replica placement is handled by the ring.</para>
 
 <para>Every deleted record or file in the system is marked by a tombstone, so that deletions can be replicated alongside creations. These tombstones are cleaned up by the replication process after a period of time referred to as the consistency window, which is related to replication duration and how long transient failures can remove a node from the cluster. Tombstone cleanup must be tied to replication to reach replica convergence.</para>
-<para>If a replicator detects that a remote drive is has failed, it will use the ring's &#8220;get_more_nodes&#8221; interface to choose an alternate node to synchronize with. The replicator can generally maintain desired levels of replication in the face of hardware failures, though some replicas may not be in an immediately usable location.</para>
+<para>If a replicator detects that a remote drive has failed, it will use the ring's &#8220;get_more_nodes&#8221; interface to choose an alternate node to synchronize with. The replicator can generally maintain desired levels of replication in the face of hardware failures, though some replicas may not be in an immediately usable location.</para>
 <para>Replication is an area of active development, and likely rife with potential improvements to speed and correctness.</para>
 <para>There are two major classes of replicator - the db replicator, which replicates accounts and containers, and the object replicator, which replicates object data.</para>
 <section xml:id="database-replication">
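The tombstone-and-consistency-window behavior described in this hunk can be sketched as a toy model. Everything here is illustrative: `merge`, `reap_tombstones`, the `(timestamp, deleted)` record shape, and `CONSISTENCY_WINDOW` are assumed names for the sketch, not Swift's actual implementation.

```python
import time

# Toy consistency window (Swift calls the related setting a "reclaim age");
# one week is an illustrative value, not Swift's default.
CONSISTENCY_WINDOW = 7 * 24 * 3600

def merge(local, remote):
    """Merge remote records into local; the newest timestamp wins.

    Deletions travel as tombstone records (deleted=True), so they
    replicate through exactly the same path as creations.
    """
    for name, (ts, deleted) in remote.items():
        current = local.get(name)
        if current is None or ts > current[0]:
            local[name] = (ts, deleted)
    return local

def reap_tombstones(store, now=None):
    """Drop tombstones older than the consistency window.

    By then, every replica should have seen the deletion, so keeping
    the marker no longer helps convergence.
    """
    if now is None:
        now = time.time()
    return {
        name: (ts, deleted)
        for name, (ts, deleted) in store.items()
        if not (deleted and now - ts > CONSISTENCY_WINDOW)
    }
```

Tying cleanup to the window matters: reaping a tombstone too early lets a stale replica "resurrect" the deleted object on the next merge.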
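The handoff fallback the corrected sentence describes (choosing an alternate node when a remote drive has failed) might look roughly like the sketch below. The names `pick_sync_targets`, `primary_nodes`, `handoff_nodes`, and `is_healthy` are hypothetical stand-ins; the real replicator iterates handoff candidates via the ring's get_more_nodes interface rather than taking plain lists.

```python
def pick_sync_targets(primary_nodes, handoff_nodes, is_healthy, replicas=3):
    """Choose nodes to synchronize with.

    Prefer the ring's primary nodes; when one is unhealthy (e.g. a
    failed drive), fall back to handoff candidates so the desired
    replica count is still met, even if some replicas land in a
    less-than-ideal (handoff) location.
    """
    targets = [node for node in primary_nodes if is_healthy(node)]
    handoffs = iter(handoff_nodes)
    while len(targets) < replicas:
        try:
            candidate = next(handoffs)
        except StopIteration:
            break  # ran out of candidates; replicate what we can
        if is_healthy(candidate):
            targets.append(candidate)
    return targets
```

This mirrors the paragraph's point: replication levels are generally maintained through hardware failures, but a replica pushed to a handoff node is not in its ring-assigned location until a later pass moves it back.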

