src: rename ReplicatedPG to PrimaryLogPG #12487

Merged
merged 1 commit into from Dec 14, 2016

6 changes: 3 additions & 3 deletions doc/dev/object-store.rst
@@ -49,12 +49,12 @@

"OSDMapTool" -> "OSDMap"

"PG" -> "ReplicatedPG"
"PG" -> "PrimaryLogPG"
"PG" -> "ObjectStore"
"PG" -> "OSDMap"

"ReplicatedPG" -> "ObjectStore"
"ReplicatedPG" -> "OSDMap"
"PrimaryLogPG" -> "ObjectStore"
"PrimaryLogPG" -> "OSDMap"

"ObjectStore" -> "FileStore"

2 changes: 1 addition & 1 deletion doc/dev/osd_internals/erasure_coding/ecbackend.rst
@@ -193,7 +193,7 @@ Pipeline
Reading src/osd/ExtentCache.h should have given a good idea of how
operations might overlap. There are several states involved in
processing a write operation and an important invariant which
-isn't enforced by ReplicatedPG at a higher level which need to be
+isn't enforced by PrimaryLogPG at a higher level which need to be
managed by ECBackend. The important invariant is that we can't
have uncacheable and rmw operations running at the same time
on the same object. For simplicity, we simply enforce that any
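As an aside for readers of the hunk above, here is a minimal, self-contained sketch of the stated invariant: per object, read-modify-write and uncacheable operations are never in flight at the same time, and conflicting arrivals wait. This is not ECBackend code; `OpKind` and `ObjectPipeline` are invented names.

```cpp
// Illustration only -- OpKind and ObjectPipeline are not Ceph classes.
#include <cassert>
#include <deque>
#include <iostream>
#include <string>
#include <utility>

enum class OpKind { RMW, Uncacheable };

// Per-object pipeline: an rmw op and an uncacheable op must never be
// in flight at the same time on the same object.
class ObjectPipeline {
  std::deque<std::pair<std::string, OpKind>> waiting;
  int in_flight = 0;
  OpKind in_flight_kind = OpKind::RMW;

 public:
  // Returns true if the op may start now; otherwise it is queued.
  bool start(const std::string& name, OpKind kind) {
    if (waiting.empty() && (in_flight == 0 || in_flight_kind == kind)) {
      in_flight_kind = kind;
      ++in_flight;
      return true;
    }
    waiting.emplace_back(name, kind);  // conflicting kind (or behind one): wait
    return false;
  }

  // Called when an in-flight op completes; drains compatible waiters.
  void finish() {
    assert(in_flight > 0);
    if (--in_flight == 0 && !waiting.empty()) {
      OpKind next = waiting.front().second;
      while (!waiting.empty() && waiting.front().second == next) {
        std::cout << "starting queued op " << waiting.front().first << "\n";
        ++in_flight;
        in_flight_kind = next;
        waiting.pop_front();
      }
    }
  }
};

int main() {
  ObjectPipeline obj;
  std::cout << obj.start("write1", OpKind::RMW) << "\n";          // 1: runs
  std::cout << obj.start("write2", OpKind::RMW) << "\n";          // 1: same kind
  std::cout << obj.start("append", OpKind::Uncacheable) << "\n";  // 0: queued
  obj.finish();
  obj.finish();  // last rmw done, so the queued uncacheable op starts
}
```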
15 changes: 4 additions & 11 deletions doc/dev/osd_internals/log_based_pg.rst
@@ -5,19 +5,12 @@ Log Based PG
Background
==========

-Why ReplicatedPG?
+Why PrimaryLogPG?
-----------------

Currently, consistency for all ceph pool types is ensured by primary
log-based replication. This goes for both erasure-coded and
-replicated pools. If you ever find yourself asking "why is it that
-both replicated and erasure coded pools are implemented by
-ReplicatedPG.h/cc", that's why (though it definitely should be called
-LogBasedPG, should include peering, and PG should be an abstract
-interface defining only those things the OSD needs to know to route
-messages etc -- but we live in an imperfect world where git deals
-imperfectly with cherry-picking between branches where the file has
-different names).
+replicated pools.

Primary log-based replication
-----------------------------
@@ -55,7 +48,7 @@ newer cannot have completed without that log containing it) and the
newest head remembered (clearly, all writes in the log were started,
so it's fine for us to remember them) as the new head. This is the
main point of divergence between replicated pools and ec pools in
-PG/ReplicatedPG: replicated pools try to choose the newest valid
+PG/PrimaryLogPG: replicated pools try to choose the newest valid
option to avoid the client needing to replay those operations and
instead recover the other copies. EC pools instead try to choose
the *oldest* option available to them.
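To make the contrast above concrete, here is a toy sketch of the head-selection rule: replicated pools take the newest head reported by the available copies, EC pools take the oldest. This is not Ceph's actual peering code; `Version` and `pick_authoritative_head` are invented names.

```cpp
// Illustration only -- Version and pick_authoritative_head are invented.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <tuple>
#include <vector>

struct Version {
  uint64_t epoch;
  uint64_t v;
  bool operator<(const Version& o) const {
    return std::tie(epoch, v) < std::tie(o.epoch, o.v);
  }
};

// Replicated pools recover the other copies up to the newest head they saw;
// EC pools fall back to the oldest head so every shard can be made consistent.
Version pick_authoritative_head(const std::vector<Version>& heads,
                                bool erasure_coded) {
  return erasure_coded ? *std::min_element(heads.begin(), heads.end())
                       : *std::max_element(heads.begin(), heads.end());
}

int main() {
  std::vector<Version> heads = {{10, 42}, {10, 45}, {10, 44}};
  Version r = pick_authoritative_head(heads, /*erasure_coded=*/false);
  Version e = pick_authoritative_head(heads, /*erasure_coded=*/true);
  std::cout << "replicated picks " << r.epoch << "'" << r.v
            << ", ec picks " << e.epoch << "'" << e.v << "\n";  // 10'45 vs 10'42
}
```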
@@ -85,7 +78,7 @@ PGBackend
So, the fundamental difference between replication and erasure coding
is that replication can do destructive updates while erasure coding
cannot. It would be really annoying if we needed to have two entire
-implementations of ReplicatedPG, one for each of the two, if there are
+implementations of PrimaryLogPG, one for each of the two, if there are
really only a few fundamental differences:

1. How reads work -- async only, requires remote reads for ec
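The split this paragraph motivates can be sketched as a tiny backend interface: one PG implementation delegating only the genuinely different pieces to a backend. This is a hedged illustration; `PGBackendSketch` and its two methods are invented here and are far smaller than the real PGBackend, but they capture the two differences named in the text (asynchronous/remote reads for EC, and whether destructive updates are allowed).

```cpp
// Invented for illustration; the real src/osd/PGBackend.h is far richer.
#include <functional>
#include <iostream>
#include <string>

class PGBackendSketch {
 public:
  virtual ~PGBackendSketch() = default;
  // Difference 1: reads. EC has to gather remote shards, so reads complete
  // asynchronously; a replicated primary can usually read locally.
  virtual void read(const std::string& oid,
                    std::function<void(std::string)> on_complete) = 0;
  // Difference 2: updates. Replication may overwrite in place (destructive);
  // EC cannot, and has to write whole new shard versions instead.
  virtual bool supports_destructive_updates() const = 0;
};

class ReplicatedBackendSketch : public PGBackendSketch {
 public:
  void read(const std::string& oid,
            std::function<void(std::string)> on_complete) override {
    on_complete("local data for " + oid);  // immediate local read
  }
  bool supports_destructive_updates() const override { return true; }
};

class ECBackendSketch : public PGBackendSketch {
 public:
  void read(const std::string& oid,
            std::function<void(std::string)> on_complete) override {
    // Real code would issue shard reads to other OSDs and decode the stripe.
    on_complete("reconstructed data for " + oid);
  }
  bool supports_destructive_updates() const override { return false; }
};

int main() {
  ReplicatedBackendSketch rep;
  ECBackendSketch ec;
  for (PGBackendSketch* b : {static_cast<PGBackendSketch*>(&rep),
                             static_cast<PGBackendSketch*>(&ec)}) {
    b->read("obj", [](std::string data) { std::cout << data << "\n"; });
    std::cout << "destructive updates allowed: "
              << b->supports_destructive_updates() << "\n";
  }
}
```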
2 changes: 1 addition & 1 deletion doc/dev/osd_internals/map_message_handling.rst
@@ -97,7 +97,7 @@ If these conditions are not met, the op is either discarded or queued for later
CEPH_MSG_OSD_OP processing
--------------------------

-ReplicatedPG::do_op handles CEPH_MSG_OSD_OP op and will queue it
+PrimaryLogPG::do_op handles CEPH_MSG_OSD_OP op and will queue it

1. in wait_for_all_missing if it is a CEPH_OSD_OP_PGLS for a designated snapid and some object updates are still missing
2. in waiting_for_active if the op may write but the scrubber is working
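A schematic sketch of the "queue or execute" decision described above. Only the two conditions visible before the list is truncated are modeled, and `OpInfo`, `PGState`, and `route_op` are invented names, not real code.

```cpp
// Illustration only; queue names mirror the doc text, the rest is invented.
#include <iostream>
#include <string>

struct OpInfo {
  bool is_pgls_for_snap;  // CEPH_OSD_OP_PGLS against a designated snapid
  bool may_write;
};

struct PGState {
  bool objects_missing;   // some object updates still missing
  bool scrubbing;         // the scrubber is working
};

// Returns the queue the op would wait on, or "execute" if none applies.
// The real do_op checks several more conditions than the two shown here.
std::string route_op(const OpInfo& op, const PGState& pg) {
  if (op.is_pgls_for_snap && pg.objects_missing)
    return "wait_for_all_missing";
  if (op.may_write && pg.scrubbing)
    return "waiting_for_active";
  return "execute";
}

int main() {
  std::cout << route_op({true, false}, {true, false}) << "\n";   // wait_for_all_missing
  std::cout << route_op({false, true}, {false, true}) << "\n";   // waiting_for_active
  std::cout << route_op({false, true}, {false, false}) << "\n";  // execute
}
```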
2 changes: 1 addition & 1 deletion doc/dev/osd_internals/osd_overview.rst
@@ -81,7 +81,7 @@ Concepts
See MapHandling

*PG*
-See src/osd/PG.* src/osd/ReplicatedPG.*
+See src/osd/PG.* src/osd/PrimaryLogPG.*

Objects in rados are hashed into *PGs* and *PGs* are placed via crush onto
OSDs. The PG structure is responsible for handling requests pertaining to
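A toy sketch of the two-step placement the *PG* entry describes: an object is hashed to a PG, and the PG is mapped to a set of OSDs. The hash and the `crush_stub` below are deliberately not Ceph's real hash or CRUSH; they only show the shape of the mapping.

```cpp
// Toy mapping only -- std::hash and crush_stub are stand-ins, not CRUSH.
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

uint32_t object_to_pg(const std::string& oid, uint32_t pg_num) {
  return static_cast<uint32_t>(std::hash<std::string>{}(oid)) % pg_num;
}

// Stand-in for CRUSH: deterministically pick `size` OSDs for a PG.
std::vector<int> crush_stub(uint32_t pg, int num_osds, int size) {
  std::vector<int> acting;
  for (int i = 0; i < size; ++i)
    acting.push_back(static_cast<int>((pg + 7u * i) % num_osds));
  return acting;
}

int main() {
  uint32_t pg = object_to_pg("rbd_data.1234", /*pg_num=*/128);
  std::vector<int> acting = crush_stub(pg, /*num_osds=*/12, /*size=*/3);
  std::cout << "pg " << pg << " -> osds";
  for (int osd : acting) std::cout << " " << osd;
  std::cout << "\n";
}
```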
2 changes: 1 addition & 1 deletion doc/dev/osd_internals/snaps.rst
@@ -65,7 +65,7 @@ order to determine which clones might need to be removed upon snap
removal, we maintain a mapping from snap to *hobject_t* using the
*SnapMapper*.

-See ReplicatedPG::SnapTrimmer, SnapMapper
+See PrimaryLogPG::SnapTrimmer, SnapMapper

This trimming is performed asynchronously by the snap_trim_wq while the
pg is clean and not scrubbing.
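A hedged sketch of the snap-to-object mapping idea referenced above: on snap removal, look up the clones recorded for that snap and hand them to the trimmer. `SnapToObjectIndex` is an invented in-memory stand-in, not the real SnapMapper.

```cpp
// Invented stand-in for SnapMapper; real code persists this mapping.
#include <cstdint>
#include <iostream>
#include <map>
#include <set>
#include <string>

using snapid_t = uint64_t;

class SnapToObjectIndex {
  // snap id -> objects (clones) that still reference that snap
  std::map<snapid_t, std::set<std::string>> by_snap;

 public:
  void add_clone(snapid_t snap, const std::string& hoid) {
    by_snap[snap].insert(hoid);
  }

  // On snap removal, return the clones that may now need trimming.
  std::set<std::string> objects_for_snap(snapid_t snap) const {
    auto it = by_snap.find(snap);
    return it == by_snap.end() ? std::set<std::string>{} : it->second;
  }

  void erase_snap(snapid_t snap) { by_snap.erase(snap); }
};

int main() {
  SnapToObjectIndex index;
  index.add_clone(4, "rbd_data.1234:clone4");
  index.add_clone(4, "rbd_data.abcd:clone4");
  index.add_clone(7, "rbd_data.1234:clone7");

  // Snap 4 is removed: these clones get handed to the asynchronous trimmer,
  // which the doc says only runs while the PG is clean and not scrubbing.
  for (const auto& hoid : index.objects_for_snap(4))
    std::cout << "trim candidate: " << hoid << "\n";
  index.erase_snap(4);
}
```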
6 changes: 3 additions & 3 deletions doc/dev/osd_internals/watch_notify.rst
@@ -32,7 +32,7 @@ Notify to the Watch and either:
* if the Watch is *connected*, sends a Notify message to the client
* if the Watch is *unconnected*, does nothing.

-When the Watch becomes connected (in ReplicatedPG::do_osd_op_effects),
+When the Watch becomes connected (in PrimaryLogPG::do_osd_op_effects),
Notifies are resent to all remaining tracked Notify objects.

Each Notify object tracks the set of un-notified Watchers via
@@ -53,8 +53,8 @@ A watch may be in one of 5 states:
Case 2 occurs between when an OSD goes active and the ObjectContext
for an object with watchers is loaded into memory due to an access.
During Case 2, no state is registered for the watch. Case 2
-transitions to Case 4 in ReplicatedPG::populate_obc_watchers() during
-ReplicatedPG::find_object_context. Case 1 becomes case 3 via
+transitions to Case 4 in PrimaryLogPG::populate_obc_watchers() during
+PrimaryLogPG::find_object_context. Case 1 becomes case 3 via
OSD::do_osd_op_effects due to a watch operation. Case 4,5 become case
3 in the same way. Case 3 becomes case 4 when the connection resets
on a watcher's session.
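The transitions quoted above can be written out mechanically. The sketch below models only what the hunk shows, leaving the five cases as opaque numbers because their definitions sit in the part of the file the diff does not display; every name here is invented.

```cpp
// Only the transitions quoted in the hunk; the five cases are left opaque.
#include <iostream>

enum class WatchCase { c1 = 1, c2, c3, c4, c5 };

WatchCase on_populate_obc_watchers(WatchCase c) {
  return c == WatchCase::c2 ? WatchCase::c4 : c;  // case 2 -> case 4
}

WatchCase on_do_osd_op_effects(WatchCase c) {
  // Cases 1, 4 and 5 become case 3 when a watch op connects the watcher.
  if (c == WatchCase::c1 || c == WatchCase::c4 || c == WatchCase::c5)
    return WatchCase::c3;
  return c;
}

WatchCase on_connection_reset(WatchCase c) {
  return c == WatchCase::c3 ? WatchCase::c4 : c;  // case 3 -> case 4
}

int main() {
  WatchCase c = WatchCase::c2;
  c = on_populate_obc_watchers(c);           // 2 -> 4
  c = on_do_osd_op_effects(c);               // 4 -> 3
  c = on_connection_reset(c);                // 3 -> 4
  std::cout << static_cast<int>(c) << "\n";  // prints 4
}
```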
2 changes: 1 addition & 1 deletion doc/dev/versions.rst
@@ -10,7 +10,7 @@ to the pg log as transactions, and is incremented in all the places it
used to be. The user_version is modified by manipulating the new
OpContext::user_at_version and is also persisted via the pg log
transactions.
-user_at_version is modified only in ReplicatedPG::prepare_transaction
+user_at_version is modified only in PrimaryLogPG::prepare_transaction
when the op was a "user modify" (a non-watch write), and the durable
user_version is updated according to the following rules:
1) set user_at_version to the maximum of ctx->new_obs.oi.user_version+1
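A minimal sketch of rule 1) as far as it is visible before the hunk is cut off: user_at_version moves only on a "user modify" (a non-watch write) and becomes at least the object's previous user_version plus one. The second operand of the quoted max is truncated in the diff, so using the running value for it below is an assumption, and all names are invented.

```cpp
// Illustration only; the second operand of the max in rule 1) is cut off
// in the hunk, so treating it as the running value here is a guess.
#include <algorithm>
#include <cstdint>
#include <iostream>

struct ObjectState { uint64_t user_version = 0; };

// Rule 1) as quoted: on a user modify, bump to at least user_version + 1.
uint64_t next_user_at_version(const ObjectState& obs, bool user_modify,
                              uint64_t current_user_at_version) {
  if (!user_modify)
    return current_user_at_version;  // watch ops do not bump it
  return std::max(current_user_at_version, obs.user_version + 1);
}

int main() {
  ObjectState obs{41};
  std::cout << next_user_at_version(obs, true, 40) << "\n";   // 42
  std::cout << next_user_at_version(obs, false, 40) << "\n";  // 40
}
```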