
doc/rados: edit placement-groups.rst (1 of x) #51975

Conversation

zdover23 (Contributor) commented Jun 8, 2023

Edit doc/rados/operations/placement-groups.rst.

https://tracker.ceph.com/issues/58485

Contribution Guidelines

Checklist

  • Tracker (select at least one)
    • References tracker ticket
    • Very recent bug; references commit where it was introduced
    • New feature (ticket optional)
    • Doc update (no ticket needed)
    • Code cleanup (no ticket needed)
  • Component impact
    • Affects Dashboard, opened tracker ticket
    • Affects Orchestrator, opened tracker ticket
    • No impact that needs to be tracked
  • Documentation (select at least one)
    • Updates relevant documentation
    • No doc update is appropriate
  • Tests (select at least one)
Show available Jenkins commands
  • jenkins retest this please
  • jenkins test classic perf
  • jenkins test crimson perf
  • jenkins test signed
  • jenkins test make check
  • jenkins test make check arm64
  • jenkins test submodules
  • jenkins test dashboard
  • jenkins test dashboard cephadm
  • jenkins test api
  • jenkins test docs
  • jenkins render docs
  • jenkins test ceph-volume all
  • jenkins test ceph-volume tox
  • jenkins test windows

anthonyeleven (Contributor) left a comment

Various suggestions

Placement groups (PGs) are an internal implementation detail of how Ceph
distributes data. Autoscaling provides a way to manage PGs, and especially to
manage the number of PGs present in different pools. When *pg-autoscaling* is
enabled, the cluster is allowed to make recommendations and automatic
Contributor:

s/and/or/

Contributor Author:

Accepted.
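
As an aside on the behavior described in the quoted paragraph: the autoscaler is controlled per pool, and the sketch below shows the usual mode choices (the pool name "foo" is only an example).

  # Only report recommended pg_num changes (typically surfaced as a health warning):
  ceph osd pool set foo pg_autoscale_mode warn
  # Apply recommended pg_num changes automatically:
  ceph osd pool set foo pg_autoscale_mode on
  # Disable autoscaling for this pool:
  ceph osd pool set foo pg_autoscale_mode off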


.. prompt:: bash #

ceph osd pool set foo pg_autoscale_mode on

You can also configure the default ``pg_autoscale_mode`` that is
set on any pools that are subsequently created:
There is also a default ``pg_autoscale_mode`` setting for any pools that are
Contributor:

s/default//

Contributor Author:

Accepted.

You can also configure the default ``pg_autoscale_mode`` that is
set on any pools that are subsequently created:
There is also a default ``pg_autoscale_mode`` setting for any pools that are
created after the initial setup of the cluster. To configure this setting, run
Contributor:

s/configure/change/

Contributor Author:

Accepted.
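
As a hedged note on the default pg_autoscale_mode discussed in this thread: the default that newly created pools inherit is a cluster configuration option; the option name below is the one used in recent Ceph releases.

  # Set the autoscaler mode that pools created from now on will inherit:
  ceph config set global osd_pool_default_pg_autoscale_mode off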

run the below if the ``.mgr`` pool should be constrained to ``ssd`` devices:
If the ``ceph osd pool autoscale-status`` command returns no output at all,
there is probably at least one pool that spans multiple CRUSH roots. This
'spanning pool' issue can happen because of scenarios like the following:
Contributor:

s/because of/in/

Contributor Author:

Accepted.
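
A sketch of the kind of intervention the quoted text describes, assuming the common pattern of giving the .mgr pool a device-class-specific replicated rule so that no pool spans multiple CRUSH roots (the rule name "replicated_ssd" is only an example):

  # Replicated rule rooted at "default", failure domain "host", restricted to the "ssd" class:
  ceph osd crush rule create-replicated replicated_ssd default host ssd
  # Point the .mgr pool at that rule:
  ceph osd pool set .mgr crush_rule replicated_ssd
  # The autoscaler should then be able to report on all pools again:
  ceph osd pool autoscale-status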

This will result in a small amount of backfill traffic that should complete
quickly.
This intervention will result in a small amount of backfill traffic, but
typically this traffic completes quickly.
Contributor:

s/traffic//

Contributor Author:

Accepted.

those adjustments are made.
When a cluster or pool is first created, it consumes only a small fraction of
the total cluster capacity and appears to the system as if it should need only
a small number of PGs. However, in most cases, cluster administrators know
Contributor:

s/most/some/

Contributor Author:

Accepted.
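
The quoted paragraph leads into the target-size settings; as a hedged example of how an administrator communicates the expected eventual size of a pool (the pool name "mypool" is hypothetical):

  # Tell the autoscaler this pool is expected to grow to roughly 100 TiB,
  # so pg_num is sized for the expected capacity rather than current usage:
  ceph osd pool set mypool target_size_bytes 100T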

as specified by the administrator) to the expected storage of all other pools
that have target ratios set. If both ``target_size_bytes`` and
``target_size_ratio`` are specified, then ``target_size_ratio`` takes
precedence.
Contributor:

@anthonyeleven - There were a few sentences we were a little uncertain about. This ("If both...") was one. It originally read: "If both target size bytes and ratio are specified, the ratio takes precedence." Does this revision look okay to you? Thanks for your help.

Contributor:

Honestly the autoscaler is a mystery to me so I have no idea :-/
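
For reference, a minimal sketch of the precedence rule as revised above (the pool name "mypool" is hypothetical):

  ceph osd pool set mypool target_size_bytes 100T
  # If both settings are present on the same pool, the ratio is what the autoscaler uses:
  ceph osd pool set mypool target_size_ratio 0.2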

size set.

#. Normalizing the target ratios among pools that have target ratio set so
that collectively they target the other pools. For example, four pools
Contributor:

@anthonyeleven - There were a few sentences we were a little uncertain about. This ("Normalizing...") was one. It originally read: "Normalizing the target ratios among pools with target ratio set so they collectively target the rest of the space." Does this revision look okay to you? Thanks for your help.

Contributor:

I think maybe "collectively target cluster capacity"

Contributor Author:

Accepted with minor modification.
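
A hedged illustration of the normalization being discussed (pool names are hypothetical): when several pools are given the same target ratio, the ratios are normalized so that together they account for the capacity not already claimed through target_size_bytes.

  # Four pools, each with target_size_ratio 1.0: after normalization each is
  # treated as targeting roughly one quarter of that remaining capacity.
  for pool in pool-a pool-b pool-c pool-d; do
      ceph osd pool set "$pool" target_size_ratio 1.0
  done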

pool is working towards.

- **NEW PG_NUM** (if present) is the value that the system is recommending the
``pg_num`` of the pool to be changed to. It is always a power of 2, and it is
Contributor:

@anthonyeleven - There were a few sentences we were a little uncertain about. This ("NEW PG_NUM (if present)...") was one. It originally read: "NEW PG_NUM, if present, is what the system believes the pool's pg_num should be changed to." Does this revision look okay to you? Thanks for your help.

Contributor:

This is basically just a formatting improvement? LGTM. I think the "if present" reflects changes in behavior between releases -- there are more columns now than when the autoscaler was first introduced.
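
As a hedged usage note on the column being discussed: the recommendation can be inspected with autoscale-status, and when the autoscaler is in "warn" or "off" mode an administrator can apply it by hand (the pool name and value below are only examples):

  # Show per-pool autoscaler state, including NEW PG_NUM when a change is recommended:
  ceph osd pool autoscale-status
  # Apply a recommended value manually; pg_num targets are powers of two:
  ceph osd pool set mypool pg_num 256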

Edit doc/rados/operations/placement-groups.rst.

https://tracker.ceph.com/issues/58485

Co-authored-by: Anthony D'Atri <anthony.datri@gmail.com>
Co-authored-by: Cole Mitchell <cole.mitchell.ceph@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
zdover23 force-pushed the wip-doc-2023-06-09-rados-operations-placement-groups-1-of-x branch from 9eb6d1d to d6e1116 on June 9, 2023 at 13:13
zdover23 merged commit 1e3835a into ceph:main on Jun 9, 2023
11 checks passed
zdover23 (Contributor Author) commented Jun 9, 2023

#51984 - Reef backport
#51985 - Quincy backport

3 participants