
qa: upgrade only from N-2, N-1 releases #53734

Merged
merged 3 commits into ceph:main on Jan 29, 2024

Conversation

dparmar18
Contributor

@dparmar18 dparmar18 commented Sep 29, 2023

Contribution Guidelines

Checklist

  • Tracker (select at least one)
    • References tracker ticket
    • Very recent bug; references commit where it was introduced
    • New feature (ticket optional)
    • Doc update (no ticket needed)
    • Code cleanup (no ticket needed)
  • Component impact
    • Affects Dashboard, opened tracker ticket
    • Affects Orchestrator, opened tracker ticket
    • No impact that needs to be tracked
  • Documentation (select at least one)
    • Updates relevant documentation
    • No doc update is appropriate
  • Tests (select at least one)

@github-actions github-actions bot added the cephfs Ceph File System label Sep 29, 2023
@dparmar18 dparmar18 force-pushed the wip-62953 branch 29 times, most recently from 4a8334d to a090a73 on September 29, 2023 18:09
@vshankar
Contributor

vshankar commented Nov 6, 2023

@dparmar18 https://pulpito.ceph.com/vshankar-2023-11-06_09:35:56-fs:upgrade-wip-vshankar-testing-20231106.073650-testing-default-smithi/

PRs in test - https://github.com/ceph/ceph/pulls?q=is%3Aopen+label%3Acephfs+label%3Awip-vshankar-testing

The (one) failure is the one you mentioned in slack:

2023-11-06T11:02:29.349 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr:2023-11-06T11:02:29.348+0000 7faeb63d75c0 -1 init, newargv = 0x55aabe5e3c30 newargc=15
2023-11-06T11:02:29.349 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr:ceph-fuse[139339]: starting ceph client
2023-11-06T11:02:29.353 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr:terminate called after throwing an instance of 'ceph::buffer::v15_2_0::end_of_buffer'
2023-11-06T11:02:29.353 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr:  what():  End of buffer [buffer:2]
2023-11-06T11:02:29.353 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr:*** Caught signal (Aborted) **
2023-11-06T11:02:29.353 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: in thread 7fae8effd700 thread_name:ms_dispatch
2023-11-06T11:02:29.354 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: ceph version 18.0.0-7083-g85bdb240 (85bdb240f5cf2a758acc9b247951fbc8f5799bc1) reef (dev)
2023-11-06T11:02:29.354 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 1: /lib64/libpthread.so.0(+0x12cf0) [0x7faeb3012cf0]
2023-11-06T11:02:29.358 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 2: gsignal()
2023-11-06T11:02:29.358 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 3: abort()
2023-11-06T11:02:29.358 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 4: /lib64/libstdc++.so.6(+0x9009b) [0x7faeb149009b]
2023-11-06T11:02:29.358 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 5: /lib64/libstdc++.so.6(+0x9653c) [0x7faeb149653c]
2023-11-06T11:02:29.359 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 6: /lib64/libstdc++.so.6(+0x96597) [0x7faeb1496597]
2023-11-06T11:02:29.359 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 7: /lib64/libstdc++.so.6(+0x967f8) [0x7faeb14967f8]
2023-11-06T11:02:29.359 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 8: (ceph::buffer::v15_2_0::list::iterator_impl<true>::copy(unsigned int, char*)+0xa5) [0x7faeb489f475]
2023-11-06T11:02:29.359 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 9: (MDSMap::decode(ceph::buffer::v15_2_0::list::iterator_impl<true>&)+0xa54) [0x7faeb4ac8084]
2023-11-06T11:02:29.359 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 10: (Client::handle_mds_map(boost::intrusive_ptr<MMDSMap const> const&)+0x523) [0x55aabd727823]
2023-11-06T11:02:29.359 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 11: (Client::ms_dispatch2(boost::intrusive_ptr<Message> const&)+0x51d) [0x55aabd72881d]
2023-11-06T11:02:29.360 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 12: (Messenger::ms_deliver_dispatch(boost::intrusive_ptr<Message> const&)+0x478) [0x7faeb47419c8]
2023-11-06T11:02:29.360 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 13: (DispatchQueue::entry()+0x51f) [0x7faeb473eb4f]
2023-11-06T11:02:29.360 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 14: (DispatchQueue::DispatchThread::entry()+0x11) [0x7faeb480c3d1]
2023-11-06T11:02:29.360 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 15: /lib64/libpthread.so.0(+0x81ca) [0x7faeb30081ca]
2023-11-06T11:02:29.360 INFO:tasks.cephfs.fuse_mount.ceph-fuse.0.smithi150.stderr: 16: clone()

@dparmar18
Contributor Author

2-upgrade.yaml doesn't make sense in the fs/upgrade/upgraded_client sub-suite, since the sub-suite is aimed at testing a new client against an old cluster; the last commit therefore removes it, along with a few related changes:

  • moved 0-clients/ from tasks/3-workload/new_ops/ to tasks/ and renamed it to 2-clients/ (i.e. replacing 2-upgrade.yaml)
  • since new_ops/ and stress_tests/ now share the common client-upgrade yaml, moved the test yamls (in stress_tests/) directly under 3-workload/stress_tests/
  • renamed 1-client-sanity.yaml in new_ops/ to newops.yaml (a better option might have been to remove the new_ops dir and keep newops.yaml directly, but I opted for this since we might want to add new yamls to the dir)

@vshankar
Contributor

vshankar commented Nov 7, 2023

@dparmar18 This crash is due to the absence of PR #53340 in reef. The MDS will encode the following two fields in the MDSMap with version=17:

  encode(max_xattr_size, bl);
  encode(bal_rank_mask, bl);

But the client, running the reef version, will decode it like this:

  if (ev >= 17) {
    decode(bal_rank_mask, p);
  }

and then the final DECODE_FINISH(p) will result in this crash. See the discussion here - #46357 (comment)
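
For what it's worth, the failure mode can be reproduced outside of Ceph. Below is a minimal, self-contained C++ sketch (it does not use Ceph's actual bufferlist or encoding macros; the wire layout, field widths and example values are assumptions for illustration only): an encoder writes max_xattr_size followed by bal_rank_mask, while a decoder that does not know about max_xattr_size misreads its leading bytes as the length of the bal_rank_mask string and runs off the end of the buffer.

  // A minimal, self-contained sketch of the failure mode described above. It is
  // NOT Ceph's bufferlist/encoding code; the wire layout, the field widths and
  // the example values are assumptions made purely for illustration.
  #include <cstdint>
  #include <iostream>
  #include <stdexcept>
  #include <string>
  #include <vector>

  using Buffer = std::vector<uint8_t>;

  // Encoder helpers: little-endian integers, strings as a 32-bit length + bytes.
  static void put_u64(Buffer& bl, uint64_t v) {
    for (int i = 0; i < 8; ++i) bl.push_back(uint8_t(v >> (8 * i)));
  }
  static void put_string(Buffer& bl, const std::string& s) {
    uint32_t n = uint32_t(s.size());
    for (int i = 0; i < 4; ++i) bl.push_back(uint8_t(n >> (8 * i)));
    bl.insert(bl.end(), s.begin(), s.end());
  }

  // Decoder cursor that throws when asked to read past the end of the buffer,
  // playing the role of ceph::buffer::end_of_buffer in the backtrace above.
  struct Cursor {
    const Buffer& bl;
    size_t off = 0;
    void need(size_t n) const {
      if (off + n > bl.size()) throw std::runtime_error("end of buffer");
    }
    uint32_t get_u32() {
      need(4);
      uint32_t v = 0;
      for (int i = 0; i < 4; ++i) v |= uint32_t(bl[off + i]) << (8 * i);
      off += 4;
      return v;
    }
    std::string get_string() {
      uint32_t n = get_u32();   // length prefix
      need(n);                  // overruns if the "length" is actually garbage
      std::string s(bl.begin() + off, bl.begin() + off + n);
      off += n;
      return s;
    }
  };

  int main() {
    // The "new" MDS encodes two ev=17 fields: max_xattr_size, then bal_rank_mask.
    Buffer bl;
    put_u64(bl, 65536);     // max_xattr_size (example value)
    put_string(bl, "-1");   // bal_rank_mask

    // A reef client without PR #53340 believes ev=17 starts with bal_rank_mask,
    // so the leading bytes of max_xattr_size are misread as a string length.
    Cursor p{bl};
    try {
      std::string bal_rank_mask = p.get_string();  // bogus length -> overrun
      std::cout << "decoded: " << bal_rank_mask << "\n";
    } catch (const std::exception& e) {
      std::cout << "decode failed: " << e.what() << "\n";
    }
    return 0;
  }

Running this prints "decode failed: end of buffer", the simplified analogue of the end_of_buffer abort in the ceph-fuse log above.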

@dparmar18
Contributor Author

things went fine https://pulpito.ceph.com/dparmar-2023-11-07_06:18:43-fs:upgrade-main-distro-default-smithi/

@vshankar
Contributor

vshankar commented Nov 7, 2023

things went fine https://pulpito.ceph.com/dparmar-2023-11-07_06:18:43-fs:upgrade-main-distro-default-smithi/

Right, but we found an issue which would have had to be resolved eventually :)

@dparmar18
Contributor Author

Short summary of the changes made in this PR:

  • upgrade of the cluster/client is limited to at most N+2 from the base version, i.e. the upgrades in the upgrade suite are now range bounded
  • removal of 2-upgrade.yaml from upgrade/upgraded_client/, because the aim of the sub-suite (at the time of writing, qa: test new client with old cluster #48280) is to test a newer client against an old cluster
  • some refactoring to remove the redundant upgrade yamls found in upgrade/upgraded_client/tasks/3-workload/stress_tests and upgrade/upgraded_client/tasks/3-workload/new_ops (both will now make use of a single global client-upgrade yaml)
  • renaming of the yaml in upgrade/upgraded_client/tasks/3-workload/new_ops to newops.yaml

@dparmar18
Contributor Author

dir qa/suites/fs/upgrade/upgraded_client/

tree before this patch:

├── %
├── bluestore-bitmap.yaml -> ../../../../cephfs/objectstore-ec/bluestore-bitmap.yaml
├── centos_8.stream.yaml -> .qa/distros/all/centos_8.stream.yaml
├── clusters
│   ├── %
│   └── 1-mds-1-client-micro.yaml -> .qa/cephfs/clusters/1-mds-1-client-micro.yaml
├── conf -> .qa/cephfs/conf/
├── overrides
│   ├── %
│   ├── ignorelist_health.yaml -> .qa/cephfs/overrides/ignorelist_health.yaml
│   ├── ignorelist_wrongly_marked_down.yaml -> .qa/cephfs/overrides/ignorelist_wrongly_marked_down.yaml
│   └── pg-warn.yaml
└── tasks
    ├── %
    ├── 0-from
    │   ├── nautilus.yaml
    │   └── pacific.yaml
    ├── 1-mount
    │   └── mount -> .qa/cephfs/mount/
    └── 2-workload
        ├── new_ops
        │   ├── %
        │   ├── 0-clients
        │   │   ├── fuse-upgrade.yaml
        │   │   └── kclient.yaml
        │   └── 1-client-sanity.yaml
        └── stress_tests
            ├── %
            ├── 0-client-upgrade.yaml
            └── 1-tests
                ├── blogbench.yaml -> .qa/suites/fs/workload/tasks/5-workunit/suites/blogbench.yaml
                ├── dbench.yaml -> .qa/suites/fs/workload/tasks/5-workunit/suites/dbench.yaml
                ├── fsstress.yaml -> .qa/suites/fs/workload/tasks/5-workunit/suites/fsstress.yaml
                ├── iozone.yaml -> .qa/suites/fs/workload/tasks/5-workunit/suites/iozone.yaml
                └── kernel_untar_build.yaml -> .qa/suites/fs/workload/tasks/5-workunit/kernel_untar_build.yaml

tree after this patch:

├── %
├── bluestore-bitmap.yaml -> ../../../../cephfs/objectstore-ec/bluestore-bitmap.yaml
├── branch
│   ├── nautilus.yaml
│   └── pacific.yaml
├── centos_8.stream.yaml -> .qa/distros/all/centos_8.stream.yaml
├── clusters
│   ├── %
│   └── 1-mds-1-client-micro.yaml -> .qa/cephfs/clusters/1-mds-1-client-micro.yaml
├── conf -> .qa/cephfs/conf/
├── overrides
│   ├── %
│   ├── ignorelist_health.yaml -> .qa/cephfs/overrides/ignorelist_health.yaml
│   ├── ignorelist_wrongly_marked_down.yaml -> .qa/cephfs/overrides/ignorelist_wrongly_marked_down.yaml
│   └── pg-warn.yaml
└── tasks
    ├── %
    ├── 0-install.yaml
    ├── 1-mount
    │   └── mount -> .qa/cephfs/mount/
    ├── 2-clients
    │   ├── fuse-upgrade.yaml
    │   └── kclient.yaml
    └── 3-workload
        ├── new_ops
        │   ├── %
        │   └── newops.yaml
        └── stress_tests
            ├── blogbench.yaml -> .qa/suites/fs/workload/tasks/5-workunit/suites/blogbench.yaml
            ├── dbench.yaml -> .qa/suites/fs/workload/tasks/5-workunit/suites/dbench.yaml
            ├── fsstress.yaml -> .qa/suites/fs/workload/tasks/5-workunit/suites/fsstress.yaml
            ├── iozone.yaml -> .qa/suites/fs/workload/tasks/5-workunit/suites/iozone.yaml
            └── kernel_untar_build.yaml -> .qa/suites/fs/workload/tasks/5-workunit/kernel_untar_build.yaml

@vshankar
Contributor

vshankar commented Nov 8, 2023

@dparmar18 Could you explain (in the commit message) why there is a "topology" change in the way the yamls are structured? IIUC, there are changes that auto-select branch names, but it would help to be clear in the commit message.

@dparmar18
Contributor Author

@dparmar18 Could you explain (in the commit message) why there is a "topology" change in the way the yamls are structured? IIUC, there are changes that auto-select branch names, but it would help to be clear in the commit message.

the cluster should never be upgraded in upgrade/upgraded_client/, since the aim of the sub-suite is to test a newer client against an old cluster; therefore 2-upgrade.yaml was removed from the dir.

upgrade/upgraded_client/tasks/3-workload/stress_tests and upgrade/upgraded_client/tasks/3-workload/new_ops each carried their own client-upgrade yaml, which can be replaced by a single shared upgrade yaml. With that, upgraded_client/tasks/3-workload/new_ops/0-clients and upgrade/upgraded_client/tasks/3-workload/stress_tests/0-client-upgrade.yaml are gone, and since there is no 0-client-upgrade.yaml anymore, the tests from upgrade/upgraded_client/tasks/3-workload/stress_tests/1-tests now sit directly under upgrade/upgraded_client/tasks/3-workload/stress_tests.

the yaml in upgrade/upgraded_client/tasks/3-workload/new_ops was named 1-client-sanity.yaml; it has been renamed to newops.yaml

@dparmar18
Contributor Author

@vshankar the explanation is there in the commit already, do let me know if it's not enough

* start testing new_ops and stress_tests with both the drivers(i.e. fuse and kclient)
therefore moved 0-clients/ from tasks/3-workload/new_ops/ to tasks/ and renamed it to
2-clients/

* since new_ops/ and stress_tests/ now share the common upgrade yaml, moved the
tests yamls(in stress_tests/1-tests) directly under 3-workload/stress_tests/

* renamed 1-client-sanity.yaml in new_ops/ to newops.yaml

Fixes: https://tracker.ceph.com/issues/62953
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
@vshankar
Contributor

@vshankar the explanation is there in the commit already, do let me know if its not enough

I see you updated the commit message. Nice clear description now - appreciate it @dparmar18

@vshankar
Contributor

Putting this to test since the fix has been merged.

vshankar added a commit to vshankar/ceph that referenced this pull request Jan 25, 2024
* refs/pull/53734/head:
	qa: refactor client upgrade yamls and other minor touchups
	qa/upgrade/nofs: upgrade pacific->reef
	qa/upgrade/upgraded_client: upgrade nautilus->pacific and pacific->reef

@vshankar vshankar merged commit 40cb741 into ceph:main Jan 29, 2024
10 of 11 checks passed