
doc/rados: edit pools.rst (1 of x) #51908

Conversation

@zdover23 (Contributor) commented Jun 4, 2023

Edit doc/rados/operations/pools.rst.

https://tracker.ceph.com/issues/58485

Contribution Guidelines

Checklist

  • Tracker (select at least one)
    • References tracker ticket
    • Very recent bug; references commit where it was introduced
    • New feature (ticket optional)
    • Doc update (no ticket needed)
    • Code cleanup (no ticket needed)
  • Component impact
    • Affects Dashboard, opened tracker ticket
    • Affects Orchestrator, opened tracker ticket
    • No impact that needs to be tracked
  • Documentation (select at least one)
    • Updates relevant documentation
    • No doc update is appropriate
  • Tests (select at least one)
Available Jenkins commands:
  • jenkins retest this please
  • jenkins test classic perf
  • jenkins test crimson perf
  • jenkins test signed
  • jenkins test make check
  • jenkins test make check arm64
  • jenkins test submodules
  • jenkins test dashboard
  • jenkins test dashboard cephadm
  • jenkins test api
  • jenkins test docs
  • jenkins render docs
  • jenkins test ceph-volume all
  • jenkins test ceph-volume tox
  • jenkins test windows

@anthonyeleven (Contributor) left a comment:

Non-blocking approval but please consider the suggestions

is: ``size = 3``), but you can configure the number of replicas on a per-pool
basis. For `erasure-coded pools <../erasure-code>`_, resilience is defined as
the number of coding chunks (for example, ``m = 2`` in the default **erasure
code profile**).
anthonyeleven (Contributor):

size=4 is recommended for CephFS metadata pools but this might not be the place to insert that.

anthonyeleven (Contributor):

"this resilience is .."

zdover23 (Contributor, Author):

This is the rare suggestion that I'm going to reject. I think "resilience" is clearer than "this resilience".

number of OSDs that can fail without data loss is equal to the number of
replicas.

For example: a typical configuration stores an object and two replicas (that
anthonyeleven (Contributor):

I like to describe the three copies as identical, rather than have "replicas" sound like something different from the primary. That lets us use "replicas" below with a different connotation. So here I suggest:

For example: a typical configuration stores three replicas (copies) of each RADOS object (that

zdover23 (Contributor, Author):

Accepted.
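
For readers following along, here is a minimal sketch of what the passage under discussion means in practice (the pool and profile names are illustrative, not part of the PR):

    # replicated pool: keep three identical copies of each RADOS object (size = 3)
    $ ceph osd pool set mypool size 3

    # erasure-coded pool: k data chunks plus m coding chunks; m=2 tolerates two failed OSDs
    $ ceph osd erasure-code-profile set example-profile k=4 m=2
    $ ceph osd pool create ecpool 32 32 erasure example-profile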


- **Placement Groups**: You can set the number of placement groups (PGs) for
the pool. In a typical configuration, the target number of PGs is
approximately one hundred PGs per OSD. This provides optimal balancing
anthonyeleven (Contributor):

In a typical configuration, one targets one hundred PG replicas on each OSD.

Mind you, I've always thought that this retconning of the target from 200 to 100 was a mistake, but that's a larger issue for another day.

anthonyeleven (Contributor):

s/optimal/reasonable/

zdover23 (Contributor, Author):

Accepted.
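
As a rough illustration of the "about one hundred PG replicas per OSD" target (the numbers below are invented; the `pgcalc`_ tool or the PG autoscaler is the right way to do this on a real cluster):

    # rule of thumb: total PGs ~= (OSD count * target PG replicas per OSD) / pool size
    # e.g. 10 OSDs, target 100, size 3: (10 * 100) / 3 ~= 333, rounded to a nearby power of two
    $ ceph osd pool set mypool pg_num 256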


Pool Names
==========

Pool names beginning with ``.`` are reserved for use by Ceph's internal
operations. Please do not create or manipulate pools with these names.
operations. Do not create or manipulate pools with these names.
anthonyeleven (Contributor):

Well, Rook has a way of creating pools such that .mgr breaks the pg autoscaler completely, but probably not the place to mention that.
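
For readers skimming the thread, the convention is easy to see on a live cluster: listing pools shows Ceph-internal pools such as ``.mgr`` alongside user-created ones, and the leading-dot names are the ones to leave alone.

    # pool names beginning with "." (for example .mgr) are Ceph-internal
    $ ceph osd lspools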

.. note:: Starting with Luminous, all pools need to be associated to the
application using the pool. See `Associate Pool to Application`_ below for
more information.
.. note:: In Luminous and later releases, each pool must be associated to the
anthonyeleven (Contributor):

"associated with" or "tagged with"

zdover23 (Contributor, Author):

Accepted "associated with". Good catch.

Three additional review threads on doc/rados/operations/pools.rst were marked resolved (two of them outdated).
splitting would happen at the pool creation time, to avoid the latency
impact to do a runtime folder splitting.
The expected number of objects for this pool. By setting this value and
assigning a negative value to **filestore merge threshold**, you arrange
anthonyeleven (Contributor):

Filestore is an unbackend and should be removed from mention here

zdover23 (Contributor, Author):

Parsing this comment was like reading Finnegans Wake, but I finally cracked the code.

zdover23 (Contributor, Author):

This is going into Unfinished Business until I can find a suitable string to replace "filestore merge threshold".

anthonyeleven (Contributor):

Paraphrasing 1984, unperson etc.

zdover23 (Contributor, Author):

Yeah, yeah. I got it. I had to think of Stalin to get it. Then I got it.

the cluster as a whole. Each PG belongs to a specific pool: when multiple
pools use the same OSDs, make sure that the **sum** of PG replicas per OSD is
in the desired PG-per-OSD target range. To calculate an appropriate number of
PGs for your pool, use the `pgcalc`_ tool.
anthonyeleven (Contributor):

s/pool/pools/
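
To make the "sum of PG replicas per OSD" point concrete, one way to eyeball it on a running cluster (not part of the doc change itself):

    # the PGS column shows how many PG replicas each OSD currently holds across all pools
    $ ceph osd df tree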

@zdover23 force-pushed the wip-doc-2023-06-04-rados-operations-pools-1-of-x branch from d987f3d to d26cfef on June 4, 2023 at 23:56
Edit doc/rados/operations/pools.rst.

https://tracker.ceph.com/issues/58485

Co-authored-by: Anthony D'Atri <anthony.datri@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
@zdover23 force-pushed the wip-doc-2023-06-04-rados-operations-pools-1-of-x branch from d26cfef to ebaebff on June 5, 2023 at 00:04
@zdover23 merged commit a7be364 into ceph:main on Jun 5, 2023
11 checks passed
@zdover23 (Contributor, Author) commented Jun 6, 2023

#51912 - Reef backport
#51913 - Quincy backport
