ceph: raise mon_max_pg_per_osd from 300 to 600 #1047
Conversation
Allow for creating more pools.

In e529256, we raised mon_max_pg_per_osd to 300 to prevent pool creations from failing. It turns out that this wasn't quite enough. After extensive discussion, we are raising it to 600 to allow a few more RBD pools to be created.

Some background: when pools are created with a target_size_ratio, they may initially be created with more PGs, so that the PG count per OSD roughly matches the target_size_ratio percentage of mon_target_pg_per_osd, which defaults to 100. So the default pools that ocs-operator creates with a target_size_ratio of 0.49 get 128 PGs, resulting in roughly 43 PGs per OSD (replica 3). If additional pools are created later, and they are also created with a target_size_ratio, they may likewise get more than the default 32 PGs. The PG autoscaler can later scale the pools' PG counts back down toward the mon_target_pg_per_osd target, but for a certain period of time we need more headroom to be able to create additional RBD pools. Raising the limit to 600 to be on the safer side.

Signed-off-by: Michael Adam <obnox@redhat.com>
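The 128-PG / ~43-PGs-per-OSD figures above can be reproduced with a rough sketch of the arithmetic. This is not Ceph's actual autoscaler code; the 9-OSD cluster size and the round-to-nearest-power-of-two rule are assumptions made for illustration.

```python
import math

def nearest_power_of_two(n: float) -> int:
    # PG counts must be powers of two; round to the nearest one.
    return 2 ** round(math.log2(n))

def initial_pg_count(target_size_ratio: float, num_osds: int,
                     replica: int = 3,
                     mon_target_pg_per_osd: int = 100) -> int:
    # Pick a PG count so that PGs-per-OSD is roughly
    # target_size_ratio * mon_target_pg_per_osd.
    desired = target_size_ratio * mon_target_pg_per_osd * num_osds / replica
    return nearest_power_of_two(desired)

def pgs_per_osd(pg_count: int, num_osds: int, replica: int = 3) -> float:
    # Each PG has `replica` copies spread across the OSDs.
    return pg_count * replica / num_osds

# Figures from the description, assuming a 9-OSD cluster:
pgs = initial_pg_count(0.49, 9)   # -> 128
per_osd = pgs_per_osd(pgs, 9)     # -> ~42.7, i.e. roughly 43
```

With a few such pools created before the autoscaler trims them back, the per-OSD total can approach the old 300 cap, which is the headroom problem the higher limit addresses.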
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: jarrpa. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/cherrypick release-4.7
@jarrpa: once the present PR merges, I will cherry-pick it on top of release-4.7 in a new PR and assign it to you. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/cherrypick release-4.6
@jarrpa: once the present PR merges, I will cherry-pick it on top of release-4.6 in a new PR and assign it to you. In response to this:
/cherrypick release-4.7
@jarrpa: once the present PR merges, I will cherry-pick it on top of release-4.7 in a new PR and assign it to you. In response to this:
/test ocs-operator-bundle-e2e-aws
/retest
Timeout in setting up the test env. Does not seem to be related to the patch. /test ocs-operator-bundle-e2e-aws
/test ocs-operator-bundle-e2e-aws
/override ci/prow/red-hat-storage-ocs-ci-e2e-aws
@obnoxxx: Overrode contexts on behalf of obnoxxx: ci/prow/red-hat-storage-ocs-ci-e2e-aws In response to this:
/retest
/retest Please review the full test history for this PR and help us cut down flakes.
/retest Please review the full test history for this PR and help us cut down flakes.
/test ocs-operator-bundle-e2e-aws
/retest Please review the full test history for this PR and help us cut down flakes.
/override ci/prow/ocs-operator-bundle-e2e-aws
@jarrpa: Overrode contexts on behalf of jarrpa: ci/prow/ocs-operator-bundle-e2e-aws In response to this:
/cherrypick release-4.6
@jarrpa: once the present PR merges, I will cherry-pick it on top of release-4.6 in a new PR and assign it to you. In response to this:
/cherrypick release-4.7
@jarrpa: once the present PR merges, I will cherry-pick it on top of release-4.7 in a new PR and assign it to you. In response to this:
@jarrpa: new pull request created: #1051 In response to this:
@jarrpa: new pull request created: #1052 In response to this:
@obnoxxx: The following tests failed:
Full PR test history. Your PR dashboard. I understand the commands that are listed here.
Allow for creating more pools.
In e529256, we raised
the mon_max_pg_per_osd to 300 to prevent pool creations
from failing. It turns out that this wasn't quite enough.
Raising it to 600 now to allow for a few more rbd pools
to be created.
Signed-off-by: Michael Adam <obnox@redhat.com>
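For reference, in Rook-based deployments such as this one, monitor options like mon_max_pg_per_osd are commonly injected through Rook's rook-config-override ConfigMap. The sketch below is illustrative only; the namespace and the exact manifest ocs-operator generates may differ.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph   # assumption; OCS deploys into its own namespace
data:
  config: |
    [global]
    mon_max_pg_per_osd = 600
```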