doc/rados: edit pools.rst (1 of x) #51908
Conversation
Non-blocking approval but please consider the suggestions
doc/rados/operations/pools.rst
Outdated
is: ``size = 3``), but you can configure the number of replicas on a per-pool
basis. For `erasure-coded pools <../erasure-code>`_, resilience is defined as
the number of coding chunks (for example, ``m = 2`` in the default **erasure
code profile**).
size=4 is recommended for CephFS metadata pools but this might not be the place to insert that.
"this resilience is .."
This is the rare suggestion that I'm going to reject. I think "resilience" is clearer than "this resilience".
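(Aside, for readers following the thread: a minimal sketch of the commands the passage above is describing. The names ``mypool``, ``cephfs_metadata``, and ``myprofile`` are placeholders, and the ``size 4`` line reflects only the reviewer's CephFS-metadata suggestion, not the file being edited.)

    # Replicated pool: resilience is the number of replicas (size = 3 by default).
    ceph osd pool set mypool size 3
    # Reviewer's suggestion above: size = 4 for a CephFS metadata pool.
    ceph osd pool set cephfs_metadata size 4
    # Erasure-coded pool: resilience is the number of coding chunks (m).
    ceph osd erasure-code-profile set myprofile k=4 m=2
    ceph osd pool create my-ec-pool 128 128 erasure myprofile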
doc/rados/operations/pools.rst
Outdated
number of OSDs that can fail without data loss is equal to the number of
replicas.

For example: a typical configuration stores an object and two replicas (that
I like to describe the three copies as identical, rather than making "replicas" sound like something different from the primary. That lets us use "replicas" below with a different connotation. So here I suggest
For example: a typical configuration stores three replicas (copies) of each RADOS object (that
Accepted.
doc/rados/operations/pools.rst
Outdated

- **Placement Groups**: You can set the number of placement groups (PGs) for
  the pool. In a typical configuration, the target number of PGs is
  approximately one hundred PGs per OSD. This provides optimal balancing
In a typical configuration, one targets one hundred PG replicas on each OSD.
Mind you, I've always thought that this retconning of the target from 200 to 100 was a mistake, but that's a larger issue for another day.
s/optimal/reasonable/
Accepted.
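(Aside: a rough worked example of that target, assuming a hypothetical 10-OSD cluster with a single replicated pool of size 3; the pool name is a placeholder.)

    # Target of ~100 PG replicas per OSD:
    #   pg_num ~= (10 OSDs * 100) / 3 replicas ~= 333, rounded to a nearby
    #   power of two (256 or 512)
    ceph osd pool create mypool 256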
Pool Names
==========

Pool names beginning with ``.`` are reserved for use by Ceph's internal
- operations. Please do not create or manipulate pools with these names.
+ operations. Do not create or manipulate pools with these names.
Well, Rook has a way of creating pools such that .mgr
breaks the pg autoscaler completely, but probably not the place to mention that.
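(Aside: a quick way to see which pools exist, including the Ceph-internal ones the paragraph is referring to; on recent releases the ``.mgr`` pool mentioned here is one of them.)

    # Pool names beginning with "." (for example ".mgr") are internal to Ceph;
    # list them, but do not create or modify pools with such names.
    ceph osd pool ls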
doc/rados/operations/pools.rst
Outdated
- .. note:: Starting with Luminous, all pools need to be associated to the
-    application using the pool. See `Associate Pool to Application`_ below for
-    more information.
+ .. note:: In Luminous and later releases, each pool must be associated to the
"associated with" or "tagged with"
Accepted "associated with". Good catch.
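(Aside: a minimal sketch of the association the note describes; ``mypool`` and the ``rbd`` application are placeholders.)

    # Tag the pool with the application that will use it (rbd, rgw, or cephfs),
    # then confirm the association.
    ceph osd pool application enable mypool rbd
    ceph osd pool application get mypool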
- splitting would happen at the pool creation time, to avoid the latency
- impact to do a runtime folder splitting.
+ The expected number of objects for this pool. By setting this value and
+ assigning a negative value to **filestore merge threshold**, you arrange
Filestore is an unbackend and should be removed from mention here
Parsing this comment was like reading Finnegans Wake, but I finally cracked the code.
This is going into Unfinished Business until I can find a suitable string to replace "filestore merge threshold".
Paraphrasing 1984, unperson etc.
Yeah, yeah. I got it. I had to think of Stalin to get it. Then I got it.
doc/rados/operations/pools.rst
Outdated
the cluster as a whole. Each PG belongs to a specific pool: when multiple
pools use the same OSDs, make sure that the **sum** of PG replicas per OSD is
in the desired PG-per-OSD target range. To calculate an appropriate number of
PGs for your pool, use the `pgcalc`_ tool.
s/pool/pools/
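(Aside: a back-of-the-envelope version of that sum, for a hypothetical 10-OSD cluster in which two replicated pools share all OSDs; the pool names and numbers are made up.)

    # pool-a: pg_num 256, size 3 -> 768 PG replicas
    # pool-b: pg_num  64, size 3 -> 192 PG replicas
    # total: 960 PG replicas / 10 OSDs = ~96 PG replicas per OSD (within target)
    # The PGS column of "ceph osd df" reports the actual per-OSD count.
    ceph osd df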
Edit doc/rados/operations/pools.rst.
https://tracker.ceph.com/issues/58485
Co-authored-by: Anthony D'Atri <anthony.datri@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
Contribution Guidelines
To sign and title your commits, please refer to Submitting Patches to Ceph.
If you are submitting a fix for a stable branch (e.g. "pacific"), please refer to Submitting Patches to Ceph - Backports for the proper workflow.