
Pacific: mds: clear MDCache::rejoin_*_q queues before recovering file inodes #46682

Merged
merged 1 commit into ceph:pacific on Jun 28, 2022

Conversation

lxbsz
Member

@lxbsz lxbsz commented Jun 15, 2022

backport tracker: https://tracker.ceph.com/issues/56016


backport of #44655
parent tracker: https://tracker.ceph.com/issues/53741

Contribution Guidelines

Checklist

  • Tracker (select at least one)
    • References tracker ticket
    • Very recent bug; references commit where it was introduced
    • New feature (ticket optional)
    • Doc update (no ticket needed)
    • Code cleanup (no ticket needed)
  • Component impact
    • Affects Dashboard, opened tracker ticket
    • Affects Orchestrator, opened tracker ticket
    • No impact that needs to be tracked
  • Documentation (select at least one)
    • Updates relevant documentation
    • No doc update is appropriate
  • Tests (select at least one)

…_recover()

If the monitor sends the rejoin mdsmap twice, before the first one has
finished processing, identify_files_to_recover() may run twice. Since
rejoin_recover_q and rejoin_check_q are vectors, this could leave
duplicated inodes in the queues.

Fixes: https://tracker.ceph.com/issues/53741
Signed-off-by: Xiubo Li <xiubli@redhat.com>
(cherry picked from commit d82bdd8)
@lxbsz lxbsz added the needs-qa label Jun 15, 2022
@lxbsz lxbsz requested a review from a team June 15, 2022 02:00
@github-actions github-actions bot added the cephfs Ceph File System label Jun 15, 2022
@github-actions github-actions bot added this to the pacific milestone Jun 15, 2022
@ljflores
Contributor

Rados suite results: https://pulpito.ceph.com/?branch=wip-yuri3-testing-2022-06-22-1121-pacific

Failures, unrelated:
1. https://tracker.ceph.com/issues/48029
2. https://tracker.ceph.com/issues/53855
3. https://tracker.ceph.com/issues/54992
4. https://tracker.ceph.com/issues/53939
5. https://tracker.ceph.com/issues/52124
6. https://tracker.ceph.com/issues/54071
7. https://tracker.ceph.com/issues/51835 -- pending Pacific backport
8. https://tracker.ceph.com/issues/53501
9. https://tracker.ceph.com/issues/55741 -- pending Pacific backport
10. https://tracker.ceph.com/issues/51904

Details:
1. Exiting scrub checking -- not all pgs scrubbed. - Ceph - RADOS
2. rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlushDupCount - Ceph - RADOS
3. cannot stat '/etc/containers/registries.conf': No such file or directory - Ceph - RADOS
4. ceph-nfs-upgrade, pacific: Upgrade Paused due to UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.0 on host smithi103 failed - Ceph - Orchestrator
5. Invalid read of size 8 in handle_recovery_delete() - Ceph - RADOS
6. rados/cephadm/osds: Invalid command: missing required parameter hostname() - Ceph - Orchestrator
7. mgr/DaemonServer.cc: FAILED ceph_assert(pending_service_map.epoch > service_map.epoch) - Ceph - Mgr
8. Exception when running 'rook' task. - Ceph - Orchestrator
9. cephadm/test_dashboard_e2e.sh: Unable to find element cd-modal .custom-control-label when testing on orchestrator/01-hosts.e2e-spec.ts - Ceph - Mgr - Dashboard
10. AssertionError: wait_for_clean: failed before timeout expired due to down PGs - Ceph - RADOS

@ljflores
Contributor

jenkins test api

@yuriw yuriw merged commit 678ed0d into ceph:pacific Jun 28, 2022