doc/rados: edit placement-groups.rst (1 of x) #51975
Conversation
Various suggestions
Placement groups (PGs) are an internal implementation detail of how Ceph
distributes data. Autoscaling provides a way to manage PGs, and especially to
manage the number of PGs present in different pools. When *pg-autoscaling* is
enabled, the cluster is allowed to make recommendations and automatic
s/and/or/
Accepted.
.. prompt:: bash #

   ceph osd pool set foo pg_autoscale_mode on

You can also configure the default ``pg_autoscale_mode`` that is
set on any pools that are subsequently created:
There is also a default ``pg_autoscale_mode`` setting for any pools that are
s/default//
Accepted.
You can also configure the default ``pg_autoscale_mode`` that is
set on any pools that are subsequently created:
There is also a default ``pg_autoscale_mode`` setting for any pools that are
created after the initial setup of the cluster. To configure this setting, run
s/configure/change/
Accepted.
run the below if the ``.mgr`` pool should be constrained to ``ssd`` devices:
If the ``ceph osd pool autoscale-status`` command returns no output at all,
there is probably at least one pool that spans multiple CRUSH roots. This
'spanning pool' issue can happen because of scenarios like the following:
s/because of/in/
Accepted.
This will result in a small amount of backfill traffic that should complete
quickly.
This intervention will result in a small amount of backfill traffic, but
typically this traffic completes quickly.
s/traffic//
Accepted.
those adjustments are made.
When a cluster or pool is first created, it consumes only a small fraction of
the total cluster capacity and appears to the system as if it should need only
a small number of PGs. However, in most cases, cluster administrators know
s/most/some/
Accepted.
as specified by the administrator) to the expected storage of all other pools
that have target ratios set. If both ``target_size_bytes`` and
``target_size_ratio`` are specified, then ``target_size_ratio`` takes
precedence.
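The precedence rule being discussed can be sketched in Python. This is a hypothetical helper, not the actual mgr ``pg_autoscaler`` code; it only illustrates that ``target_size_ratio`` wins when both hints are set:

```python
def effective_target(target_size_bytes, target_size_ratio, cluster_capacity):
    """Return the pool's expected capacity in bytes.

    If both hints are set, target_size_ratio takes precedence,
    matching the documented autoscaler behavior.
    """
    if target_size_ratio is not None:
        return target_size_ratio * cluster_capacity
    if target_size_bytes is not None:
        return target_size_bytes
    return 0  # no hint set; the autoscaler falls back to actual usage

# ratio (0.5 of a 1000 TiB cluster) overrides the 100 TiB byte hint
print(effective_target(100 * 2**40, 0.5, 1000 * 2**40))
```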
@anthonyeleven - There were a few sentences we were a little uncertain about. This ("If both...") was one. It originally read: "If both target size bytes and ratio are specified, the ratio takes precedence." Does this revision look okay to you? Thanks for your help.
Honestly the autoscaler is a mystery to me so I have no idea :-/
size set.

#. Normalizing the target ratios among pools that have target ratio set so
   that collectively they target the other pools. For example, four pools
@anthonyeleven - There were a few sentences we were a little uncertain about. This ("Normalizing...") was one. It originally read: "Normalizing the target ratios among pools with target ratio set so they collectively target the rest of the space." Does this revision look okay to you? Thanks for your help.
I think maybe "collectively target cluster capacity"
Accepted with minor modification.
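A rough sketch of the normalization arithmetic under discussion, assuming the simple case where only ratios are set (the real autoscaler also folds in pools with ``target_size_bytes``); with four pools each at ratio 1.0, each ends up claiming a quarter of the capacity:

```python
def normalize_ratios(ratios):
    """Scale per-pool target ratios so their sum does not exceed 1.0.

    E.g. four pools each with target_size_ratio 1.0 are normalized
    to 0.25 apiece, so collectively they target the cluster capacity.
    """
    total = sum(ratios.values())
    if total <= 1.0:
        return dict(ratios)  # already fits; leave ratios unchanged
    return {pool: r / total for pool, r in ratios.items()}

print(normalize_ratios({"a": 1.0, "b": 1.0, "c": 1.0, "d": 1.0}))
```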
pool is working towards.

- **NEW PG_NUM** (if present) is the value that the system is recommending the
  ``pg_num`` of the pool to be changed to. It is always a power of 2, and it is
@anthonyeleven - There were a few sentences we were a little uncertain about. This ("NEW PG_NUM (if present)...") was one. It originally read: "NEW PG_NUM, if present, is what the system believes the pool's pg_num should be changed to." Does this revision look okay to you? Thanks for your help.
This is basically just a formatting improvement? LGTM. I think the "if present" reflects changes in behavior between releases -- there are more columns now than when the autoscaler was first introduced.
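Since the quoted text says **NEW PG_NUM** is always a power of 2, a small illustrative helper (hypothetical, not the autoscaler's actual code; the real module also applies a deviation threshold before recommending a change) shows what rounding a suggested value to the nearest power of 2 looks like:

```python
def nearest_power_of_two(n):
    """Round a suggested pg_num to the nearest power of 2,
    since NEW PG_NUM values are always powers of 2."""
    if n <= 1:
        return 1
    lower = 1 << (n.bit_length() - 1)  # largest power of 2 <= n
    upper = lower << 1                 # smallest power of 2 > n
    return lower if n - lower < upper - n else upper

print(nearest_power_of_two(200))  # -> 256
```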
Edit doc/rados/operations/placement-groups.rst.

https://tracker.ceph.com/issues/58485

Co-authored-by: Anthony D'Atri <anthony.datri@gmail.com>
Co-authored-by: Cole Mitchell <cole.mitchell.ceph@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
9eb6d1d to d6e1116