
pacific: mon/OSDMonitor: Ensure kvmon() is writeable before handling "osd new" cmd #46691

Merged
merged 1 commit into from Jun 17, 2022

Conversation

sseshasa
Contributor

backport tracker: https://tracker.ceph.com/issues/56059


backport of #46428
parent tracker: https://tracker.ceph.com/issues/55773

this backport was staged using ceph-backport.sh version 16.0.0.6848
find the latest version at https://github.com/ceph/ceph/blob/main/src/script/ceph-backport.sh


Before handling the "osd new" mon command in
OSDMonitor::prepare_command_impl(), a check is made to verify that the
authmon is writeable. Later, prepare_command_osd_new() invokes
KVMonitor::do_osd_new() to create pending dmcrypt keys and then calls
propose_pending(). That propose can fail with an assertion failure if a
prior mon command already resulted in the kvmon invoking
propose_pending().

In order to avoid such a situation, introduce a check to verify that
kvmon is also writeable in OSDMonitor::prepare_command_impl(). If it
is not writeable, the op is pushed into the wait_for_active context
queue to be retried later.

Fixes: https://tracker.ceph.com/issues/55773
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
(cherry picked from commit 9a0d42c)
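The check-then-queue pattern the commit message describes can be sketched as a small self-contained model. This is not the actual Ceph code: PaxosService, OsdNewHandler, and the queue names here are simplified stand-ins for KVMonitor, OSDMonitor::prepare_command_impl(), and the wait_for_active context queue.

```cpp
#include <cassert>
#include <queue>
#include <string>
#include <vector>

// Simplified stand-in for a paxos-backed service such as Ceph's KVMonitor:
// it is "writeable" only when no proposal is already in flight.
struct PaxosService {
  bool proposing = false;
  bool is_writeable() const { return !proposing; }
  void propose_pending() {
    // In the real code, proposing while a proposal is already in flight
    // trips an assertion -- the failure mode this backport avoids.
    assert(is_writeable());
    proposing = true;
  }
  void finish_proposal() { proposing = false; }
};

// Simplified command handler modeling the backported check: before doing
// work that ends in kvmon.propose_pending(), verify the kvmon is
// writeable; otherwise queue the op to be retried later.
struct OsdNewHandler {
  PaxosService kvmon;
  std::queue<std::string> wait_for_active;  // stand-in for the retry queue
  std::vector<std::string> completed;

  // Returns true if the command was handled, false if it was queued.
  bool handle_osd_new(const std::string& op) {
    if (!kvmon.is_writeable()) {
      wait_for_active.push(op);  // retry later instead of asserting
      return false;
    }
    kvmon.propose_pending();  // safe: writeability was just checked
    completed.push_back(op);
    return true;
  }

  // Once the in-flight proposal finishes, retry any queued ops.
  void on_active() {
    kvmon.finish_proposal();
    while (!wait_for_active.empty() && kvmon.is_writeable()) {
      std::string op = wait_for_active.front();
      wait_for_active.pop();
      handle_osd_new(op);
    }
  }
};
```

In this model, a second "osd new" arriving while the first one's proposal is still pending is queued rather than asserting, and is replayed once the service becomes writeable again, mirroring the behavior described above.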
@sseshasa sseshasa requested a review from a team as a code owner June 15, 2022 09:10
@sseshasa sseshasa added this to the pacific milestone Jun 15, 2022
@sseshasa sseshasa added the core label Jun 15, 2022
@github-actions github-actions bot added the mon label Jun 15, 2022
@sseshasa
Contributor Author

jenkins test make check

@sseshasa
Contributor Author

Teuthology Test Result
https://pulpito.ceph.com/yuriw-2022-06-15_18:29:33-rados-wip-yuri4-testing-2022-06-15-1000-pacific-distro-default-smithi/
http://pulpito.front.sepia.ceph.com/yuriw-2022-06-16_16:03:11-rados-wip-yuri4-testing-2022-06-15-1000-pacific-distro-default-smithi/

There were no related failures across the two teuthology runs.

Unrelated Failures
The following known issues were hit across the two teuthology runs mentioned above.

  1. https://tracker.ceph.com/issues/56072: New tracker unrelated to this PR
  2. https://tracker.ceph.com/issues/55322: Failure in /qa/workunits/rest/test-restful.sh
  3. https://tracker.ceph.com/issues/55741: AssertionError: Timed out retrying after 120000ms: Expected to find element: cd-modal .custom-control-label, but never found it.
  4. https://tracker.ceph.com/issues/55446: mgr-nfs-upgrade and mds_upgrade_sequence tests fail on 'ceph versions | jq -e' command
  5. https://tracker.ceph.com/issues/52124 and https://tracker.ceph.com/issues/54603: Valgrind errors
  6. https://tracker.ceph.com/issues/52420 and https://tracker.ceph.com/issues/52321: 'wait for operator' reached maximum tries (90) after waiting for 900 seconds
  7. https://tracker.ceph.com/issues/54071: rados/cephadm/osds: Invalid command: missing required parameter hostname()
  8. https://tracker.ceph.com/issues/55443: "SELinux denials found.." in rados run
  9. https://tracker.ceph.com/issues/49777: test_pool_min_size: 'check for active or peered' reached maximum tries (5) after waiting for 25 seconds
  10. https://tracker.ceph.com/issues/55141: thrashers/fastread: assertion failure: rollback_info_trimmed_to == head
  11. https://tracker.ceph.com/issues/51835: Mgr assertion failure: FAILED ceph_assert(pending_service_map.epoch > service_map.epoch)
  12. https://tracker.ceph.com/issues/52321: qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting for 900 seconds

@neha-ojha neha-ojha merged commit 95206ce into ceph:pacific Jun 17, 2022
8 checks passed
@sseshasa sseshasa deleted the wip-56059-pacific branch June 23, 2022 06:21