
reef: mon: Fix ceph versions command #51788

Merged
merged 1 commit into ceph:reef on May 30, 2023

Conversation

@pdvian commented May 26, 2023

Commit d3cca1d introduced a bug where mgr/osd/mds version information goes missing during a cluster upgrade. Collect the version information before checking whether the map is empty.

Fixes: https://tracker.ceph.com/issues/61453

Signed-off-by: Prashant D <pdhange@redhat.com>
(cherry picked from commit 3fbebe3)
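The shape of the fix can be illustrated with a minimal sketch. This is a hypothetical simplification, not the actual Ceph monitor code: the names `VersionMap` and `collect_versions` are invented for illustration. The point is the ordering bug the commit message describes: if the code returns early on an empty per-daemon map before merging the others, versions reported by mgr/osd/mds daemons are silently dropped; merging everything first and checking emptiness afterwards preserves them.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical simplified model: maps a version string to the number of
// daemons running that version (roughly what `ceph versions` reports).
using VersionMap = std::map<std::string, int>;

// Fixed ordering: merge version counts from every daemon type first,
// and only then let the caller decide whether the combined map is empty.
// The bug was the reverse: bailing out on an empty map before the
// remaining daemon types had contributed their versions.
VersionMap collect_versions(const VersionMap& mon,
                            const VersionMap& mgr,
                            const VersionMap& osd,
                            const VersionMap& mds) {
  VersionMap all;
  for (const auto* m : {&mon, &mgr, &osd, &mds}) {
    for (const auto& [ver, count] : *m) {
      all[ver] += count;  // accumulate counts across daemon types
    }
  }
  return all;
}
```

With this ordering, an empty mon map (as can transiently happen mid-upgrade) no longer hides the versions contributed by the other daemon types.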

Contribution Guidelines

Checklist

  • Tracker (select at least one)
    • References tracker ticket
    • Very recent bug; references commit where it was introduced
    • New feature (ticket optional)
    • Doc update (no ticket needed)
    • Code cleanup (no ticket needed)
  • Component impact
    • Affects Dashboard, opened tracker ticket
    • Affects Orchestrator, opened tracker ticket
    • No impact that needs to be tracked
  • Documentation (select at least one)
    • Updates relevant documentation
    • No doc update is appropriate
  • Tests (select at least one)
Show available Jenkins commands
  • jenkins retest this please
  • jenkins test classic perf
  • jenkins test crimson perf
  • jenkins test signed
  • jenkins test make check
  • jenkins test make check arm64
  • jenkins test submodules
  • jenkins test dashboard
  • jenkins test dashboard cephadm
  • jenkins test api
  • jenkins test docs
  • jenkins render docs
  • jenkins test ceph-volume all
  • jenkins test ceph-volume tox
  • jenkins test windows

@pdvian pdvian requested a review from a team as a code owner May 26, 2023 20:16
@github-actions github-actions bot added this to the reef milestone May 26, 2023
@pdvian pdvian requested a review from neha-ojha May 26, 2023 20:18
@pdvian (Author) commented May 30, 2023

https://pulpito.ceph.com/?branch=wip-yuri3-testing-2023-05-26-1329-reef

Unrelated failures:

  1. https://tracker.ceph.com/issues/57755 - task/test_orch_cli: test_cephfs_mirror times out - Ceph - Orchestrator
  2. https://tracker.ceph.com/issues/58946 - cephadm: KeyError: 'osdspec_affinity' - Ceph - Mgr - Dashboard
  3. https://tracker.ceph.com/issues/61225 - TestClsRbd.mirror_snapshot failure - Ceph - RBD
  4. https://tracker.ceph.com/issues/57754 - test_envlibrados_for_rocksdb.sh: update-alternatives: error: alternative path /usr/bin/gcc-11 doesn't exist - Ceph - RADOS
  5. https://tracker.ceph.com/issues/59196 - ceph_test_lazy_omap_stats segfault while waiting for active+clean - Ceph - RADOS

Remaining failures are due to:

  • apt-get could not get lock /var/lib/apt/lists/lock
  • The repository 'https://apt.kubernetes.io kubernetes-xenial InRelease' is not signed

@rzarzynski (Contributor)

This is a backport of #51765.

@yuriw yuriw merged commit 5a28d04 into ceph:reef May 30, 2023
14 of 15 checks passed