ceph 17.2.4 doesn't build #1

Closed
0xFEEDC0DE64 opened this issue Oct 26, 2022 · 10 comments
Comments

@0xFEEDC0DE64

Hi,
My cluster is already running ceph 17.2.4, and since some recent Arch upgrades some dependency libraries were updated too, so I cannot rebuild ceph anymore. I know this repo only builds ceph 16, but maybe you have an idea why it doesn't build with the newer version anymore?

Would a C-style cast do the trick here?

In file included from /usr/include/arrow/array/data.h:27,
                 from /usr/include/arrow/array/array_base.h:26,
                 from /usr/include/arrow/array.h:37,
                 from /usr/include/arrow/api.h:22,
                 from /home/feedc0de/ceph-arch/src/ceph-17.2.4/src/s3select/include/s3select_parquet_intrf.h:11,
                 from /home/feedc0de/ceph-arch/src/ceph-17.2.4/src/s3select/include/s3select_oper.h:16,
                 from /home/feedc0de/ceph-arch/src/ceph-17.2.4/src/s3select/include/s3select.h:12,
                 from /home/feedc0de/ceph-arch/src/ceph-17.2.4/src/rgw/rgw_s3select_private.h:35,
                 from /home/feedc0de/ceph-arch/src/ceph-17.2.4/src/rgw/rgw_s3select.cc:4:
/home/feedc0de/ceph-arch/src/ceph-17.2.4/src/s3select/include/s3select_parquet_intrf.h: In member function ‘virtual arrow::Status arrow::io::OSFile::OpenWritable(const std::string&, bool, bool, bool)’:
/home/feedc0de/ceph-arch/src/ceph-17.2.4/src/s3select/include/s3select_parquet_intrf.h:199:5: error: cannot convert ‘arrow::internal::FileDescriptor’ to ‘int’ in assignment
  199 |     ARROW_ASSIGN_OR_RAISE(fd_, ::arrow::internal::FileOpenWritable(file_name_, write_only,
      |     ^~~~~~~~~~~~~~~~~~~~~
      |     |
      |     arrow::internal::FileDescriptor
/home/feedc0de/ceph-arch/src/ceph-17.2.4/src/s3select/include/s3select_parquet_intrf.h: In member function ‘virtual arrow::Status arrow::io::OSFile::OpenReadable(const std::string&)’:
/home/feedc0de/ceph-arch/src/ceph-17.2.4/src/s3select/include/s3select_parquet_intrf.h:232:5: error: cannot convert ‘arrow::internal::FileDescriptor’ to ‘int’ in assignment
  232 |     ARROW_ASSIGN_OR_RAISE(fd_, ::arrow::internal::FileOpenReadable(file_name_));
      |     ^~~~~~~~~~~~~~~~~~~~~
      |     |
      |     arrow::internal::FileDescriptor
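
For reference, a plain C-style cast is unlikely to be enough here: newer Arrow returns a Result<arrow::internal::FileDescriptor> from FileOpenReadable()/FileOpenWritable() instead of a Result<int>, so the RAII wrapper has to be unwrapped explicitly. Below is a minimal sketch of the kind of change that might compile, shown for OpenReadable; it assumes FileDescriptor exposes a Detach() accessor (check the installed arrow/util/io_util.h) and is not the upstream fix.

// Hypothetical sketch, not the upstream fix: unwrap the Result<FileDescriptor>
// into a temporary, then release the raw descriptor so the temporary does not
// close it on destruction. The same pattern would apply in OpenWritable().
ARROW_ASSIGN_OR_RAISE(auto fd, ::arrow::internal::FileOpenReadable(file_name_));
fd_ = fd.Detach();  // fd_ stays a plain int; ownership is transferred here
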
@bazaah
Owner

bazaah commented Oct 26, 2022

I don't build with -DWITH_RADOSGW_SELECT_PARQUET=ON, but I do have a feature branch open for v17.2.5, which is currently green for builds.

I'm actively testing it in a test env at the moment, so you might want to take a look at what has been done there. You can take it and experiment with setting the select define if you like.

See: feature/v17.2.4
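
If you do want to experiment with the Parquet path on top of that branch, the define is just passed at configure time. Illustrative only, since the exact invocation depends on how you drive the build (makepkg/PKGBUILD here); the relevant part is the single -D flag:

# only -DWITH_RADOSGW_SELECT_PARQUET=ON comes from the discussion above;
# the source/build directory names are placeholders
cmake -B build -S ceph-17.2.5 -DWITH_RADOSGW_SELECT_PARQUET=ON
cmake --build build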

@0xFEEDC0DE64
Author

I also had a compiler error with your feature branch:

[  2%] Generating ceph-exporter_options.cc, ../../../include/ceph-exporter_legacy_options.h
cd /home/feedc0de/aur-ceph/src/ceph-17.2.5/build/src/common/options && /usr/bin/python3.10 /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/common/options/y2c.py --input /home/feedc0de/aur-ceph/src/ceph-17.2.5/build/src/common/options/ceph-exporter.yaml --output ceph-exporter_options.cc -
-legacy /home/feedc0de/aur-ceph/src/ceph-17.2.5/build/include/ceph-exporter_legacy_options.h --name ceph-exporter
In file included from /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/common/config_values.h:59,
                 from /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/common/config.h:28,
                 from /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/common/config_proxy.h:6,
                 from /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/common/ceph_context.h:41,
                 from /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/librados/snap_set_diff.cc:7:
/home/feedc0de/aur-ceph/src/ceph-17.2.5/src/common/options/legacy_config_opts.h:1:10: fatal error: global_legacy_options.h: No such file or directory
    1 | #include "global_legacy_options.h"
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [src/CMakeFiles/rados_snap_set_diff_obj.dir/build.make:79: src/CMakeFiles/rados_snap_set_diff_obj.dir/librados/snap_set_diff.cc.o] Error 1
make[2]: Leaving directory '/home/feedc0de/aur-ceph/src/ceph-17.2.5/build'
make[1]: *** [CMakeFiles/Makefile2:3657: src/CMakeFiles/rados_snap_set_diff_obj.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....

@0xFEEDC0DE64
Author

On a second build attempt it worked; I have some aarch64 packages ready to test :)

@0xFEEDC0DE64
Author

OSD crashes

~ sudo /usr/bin/ceph-osd -f --cluster ceph --id 12 --setuser ceph --setgroup ceph
2022-10-28T08:20:32.147+0000 ffff88601040 -1 Falling back to public interface
2022-10-28T08:23:49.958+0000 ffff88601040 -1 osd.12 27983 log_to_monitors true
2022-10-28T08:23:49.982+0000 ffff88601040 -1 osd.12 27983 mon_cmd_maybe_osd_create fail: 'osd.12 has already bound to class 'hdd', can not reset class to 'ssd'; use 'ceph osd crush rm-device-class <id>' to remove old class first': (16) Device or resource busy
2022-10-28T08:23:53.642+0000 ffff88601040 -1 bdev(0xaaaabc6ba400 /var/lib/ceph/osd/ceph-12/block) aio_submit retries 1
2022-10-28T08:24:30.476+0000 ffff7b8fc600 -1 osd.12 28243 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
/home/feedc0de/aur-ceph/src/ceph-17.2.5/src/osd/OSD.cc: In function 'void OSD::do_recovery(PG*, epoch_t, uint64_t, ThreadPool::TPHandle&)' thread ffff6765c600 time 2022-10-28T08:24:36.200440+0000
/home/feedc0de/aur-ceph/src/ceph-17.2.5/src/osd/OSD.cc: 9676: FAILED ceph_assert(started <= reserved_pushes)
 ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x134) [0xaaaab9e8bf28]
 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaab9e8c09c]
 3: (OSD::do_recovery(PG*, unsigned int, unsigned long, ThreadPool::TPHandle&)+0x4ec) [0xaaaab9f3ed10]
 4: (ceph::osd::scheduler::PGRecovery::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x28) [0xaaaaba1b4c08]
 5: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x46c) [0xaaaab9f3f25c]
 6: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x308) [0xaaaaba591af8]
 7: (ShardedThreadPool::WorkThreadSharded::entry()+0x18) [0xaaaaba594368]
 8: /usr/lib/libc.so.6(+0x80aec) [0xffff87940aec]
 9: /usr/lib/libc.so.6(+0xea5dc) [0xffff879aa5dc]
*** Caught signal (Aborted) **
 in thread ffff6765c600 thread_name:tp_osd_tp
2022-10-28T08:24:36.204+0000 ffff6765c600 -1 /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/osd/OSD.cc: In function 'void OSD::do_recovery(PG*, epoch_t, uint64_t, ThreadPool::TPHandle&)' thread ffff6765c600 time 2022-10-28T08:24:36.200440+0000
/home/feedc0de/aur-ceph/src/ceph-17.2.5/src/osd/OSD.cc: 9676: FAILED ceph_assert(started <= reserved_pushes)

 ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x134) [0xaaaab9e8bf28]
 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaab9e8c09c]
 3: (OSD::do_recovery(PG*, unsigned int, unsigned long, ThreadPool::TPHandle&)+0x4ec) [0xaaaab9f3ed10]
 4: (ceph::osd::scheduler::PGRecovery::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x28) [0xaaaaba1b4c08]
 5: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x46c) [0xaaaab9f3f25c]
 6: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x308) [0xaaaaba591af8]
 7: (ShardedThreadPool::WorkThreadSharded::entry()+0x18) [0xaaaaba594368]
 8: /usr/lib/libc.so.6(+0x80aec) [0xffff87940aec]
 9: /usr/lib/libc.so.6(+0xea5dc) [0xffff879aa5dc]

 ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
 1: __kernel_rt_sigreturn()
 2: /usr/lib/libc.so.6(+0x82790) [0xffff87942790]
 3: raise()
 4: abort()
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x188) [0xaaaab9e8bf7c]
 6: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaab9e8c09c]
 7: (OSD::do_recovery(PG*, unsigned int, unsigned long, ThreadPool::TPHandle&)+0x4ec) [0xaaaab9f3ed10]
 8: (ceph::osd::scheduler::PGRecovery::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x28) [0xaaaaba1b4c08]
 9: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x46c) [0xaaaab9f3f25c]
 10: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x308) [0xaaaaba591af8]
 11: (ShardedThreadPool::WorkThreadSharded::entry()+0x18) [0xaaaaba594368]
 12: /usr/lib/libc.so.6(+0x80aec) [0xffff87940aec]
 13: /usr/lib/libc.so.6(+0xea5dc) [0xffff879aa5dc]
2022-10-28T08:24:36.208+0000 ffff6765c600 -1 *** Caught signal (Aborted) **
 in thread ffff6765c600 thread_name:tp_osd_tp

 ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
 1: __kernel_rt_sigreturn()
 2: /usr/lib/libc.so.6(+0x82790) [0xffff87942790]
 3: raise()
 4: abort()
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x188) [0xaaaab9e8bf7c]
 6: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaab9e8c09c]
 7: (OSD::do_recovery(PG*, unsigned int, unsigned long, ThreadPool::TPHandle&)+0x4ec) [0xaaaab9f3ed10]
 8: (ceph::osd::scheduler::PGRecovery::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x28) [0xaaaaba1b4c08]
  9: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x46c) [0xaaaab9f3f25c]
 10: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x308) [0xaaaaba591af8]
 11: (ShardedThreadPool::WorkThreadSharded::entry()+0x18) [0xaaaaba594368]
 12: /usr/lib/libc.so.6(+0x80aec) [0xffff87940aec]
 13: /usr/lib/libc.so.6(+0xea5dc) [0xffff879aa5dc]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

 -7284> 2022-10-28T08:24:30.476+0000 ffff7b8fc600 -1 osd.12 28243 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
   -10> 2022-10-28T08:24:36.204+0000 ffff6765c600 -1 /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/osd/OSD.cc: In function 'void OSD::do_recovery(PG*, epoch_t, uint64_t, ThreadPool::TPHandle&)' thread ffff6765c600 time 2022-10-28T08:24:36.200440+0000
/home/feedc0de/aur-ceph/src/ceph-17.2.5/src/osd/OSD.cc: 9676: FAILED ceph_assert(started <= reserved_pushes)

 ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x134) [0xaaaab9e8bf28]
 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaab9e8c09c]
 3: (OSD::do_recovery(PG*, unsigned int, unsigned long, ThreadPool::TPHandle&)+0x4ec) [0xaaaab9f3ed10]
 4: (ceph::osd::scheduler::PGRecovery::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x28) [0xaaaaba1b4c08]
 5: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x46c) [0xaaaab9f3f25c]
 6: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x308) [0xaaaaba591af8]
 7: (ShardedThreadPool::WorkThreadSharded::entry()+0x18) [0xaaaaba594368]
 8: /usr/lib/libc.so.6(+0x80aec) [0xffff87940aec]
 9: /usr/lib/libc.so.6(+0xea5dc) [0xffff879aa5dc]

     0> 2022-10-28T08:24:36.208+0000 ffff6765c600 -1 *** Caught signal (Aborted) **
 in thread ffff6765c600 thread_name:tp_osd_tp

 ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
 1: __kernel_rt_sigreturn()
 2: /usr/lib/libc.so.6(+0x82790) [0xffff87942790]
 3: raise()
 4: abort()
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x188) [0xaaaab9e8bf7c]
 6: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaab9e8c09c]
 7: (OSD::do_recovery(PG*, unsigned int, unsigned long, ThreadPool::TPHandle&)+0x4ec) [0xaaaab9f3ed10]
 8: (ceph::osd::scheduler::PGRecovery::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x28) [0xaaaaba1b4c08]
 9: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x46c) [0xaaaab9f3f25c]
 10: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x308) [0xaaaaba591af8]
 11: (ShardedThreadPool::WorkThreadSharded::entry()+0x18) [0xaaaaba594368]
 12: /usr/lib/libc.so.6(+0x80aec) [0xffff87940aec]
 13: /usr/lib/libc.so.6(+0xea5dc) [0xffff879aa5dc]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

 -9999> 2022-10-28T08:24:30.476+0000 ffff7b8fc600 -1 osd.12 28243 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
 -9998> 2022-10-28T08:24:36.204+0000 ffff6765c600 -1 /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/osd/OSD.cc: In function 'void OSD::do_recovery(PG*, epoch_t, uint64_t, ThreadPool::TPHandle&)' thread ffff6765c600 time 2022-10-28T08:24:36.200440+0000
/home/feedc0de/aur-ceph/src/ceph-17.2.5/src/osd/OSD.cc: 9676: FAILED ceph_assert(started <= reserved_pushes)

 ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x134) [0xaaaab9e8bf28]
 2: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaab9e8c09c]
 3: (OSD::do_recovery(PG*, unsigned int, unsigned long, ThreadPool::TPHandle&)+0x4ec) [0xaaaab9f3ed10]
 4: (ceph::osd::scheduler::PGRecovery::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x28) [0xaaaaba1b4c08]
 5: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x46c) [0xaaaab9f3f25c]
 6: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x308) [0xaaaaba591af8]
 7: (ShardedThreadPool::WorkThreadSharded::entry()+0x18) [0xaaaaba594368]
 8: /usr/lib/libc.so.6(+0x80aec) [0xffff87940aec]
 9: /usr/lib/libc.so.6(+0xea5dc) [0xffff879aa5dc]

 -9997> 2022-10-28T08:24:36.208+0000 ffff6765c600 -1 *** Caught signal (Aborted) **
 in thread ffff6765c600 thread_name:tp_osd_tp

 ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
 1: __kernel_rt_sigreturn()
 2: /usr/lib/libc.so.6(+0x82790) [0xffff87942790]
 3: raise()
 4: abort()
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x188) [0xaaaab9e8bf7c]
 6: (ceph::__ceph_assertf_fail(char const*, char const*, int, char const*, char const*, ...)+0) [0xaaaab9e8c09c]
 7: (OSD::do_recovery(PG*, unsigned int, unsigned long, ThreadPool::TPHandle&)+0x4ec) [0xaaaab9f3ed10]
 8: (ceph::osd::scheduler::PGRecovery::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x28) [0xaaaaba1b4c08]
 9: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x46c) [0xaaaab9f3f25c]
 10: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x308) [0xaaaaba591af8]
 11: (ShardedThreadPool::WorkThreadSharded::entry()+0x18) [0xaaaaba594368]
 12: /usr/lib/libc.so.6(+0x80aec) [0xffff87940aec]
 13: /usr/lib/libc.so.6(+0xea5dc) [0xffff879aa5dc]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

[1]    23143 IOT instruction  sudo /usr/bin/ceph-osd -f --cluster ceph --id 12 --setuser ceph --setgroup
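
Side note on the mon_cmd_maybe_osd_create warning near the top of that log: the monitor itself suggests the fix. A sketch of the commands it points at (the OSD id and class names are taken from the message; double-check against your CRUSH map before running):

# remove the stale class binding, then set whichever class is actually correct
# (the OSD above was trying to register as 'ssd' while CRUSH had it as 'hdd')
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class ssd osd.12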

@bazaah
Owner

bazaah commented Oct 29, 2022

That's surprising; I have two test clusters I've been doing perf benchmarking on, using the most recent iteration of this branch.

Are you using the most recent HEAD? I tend to do a lot of rebasing/force-pushing on my feature branches.

I'll rebuild the packages today in a clean chroot just to make sure.

@bazaah
Owner

bazaah commented Oct 29, 2022

I also had a compiler error with your feature branch:

[  2%] Generating ceph-exporter_options.cc, ../../../include/ceph-exporter_legacy_options.h
cd /home/feedc0de/aur-ceph/src/ceph-17.2.5/build/src/common/options && /usr/bin/python3.10 /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/common/options/y2c.py --input /home/feedc0de/aur-ceph/src/ceph-17.2.5/build/src/common/options/ceph-exporter.yaml --output ceph-exporter_options.cc -
-legacy /home/feedc0de/aur-ceph/src/ceph-17.2.5/build/include/ceph-exporter_legacy_options.h --name ceph-exporter
In file included from /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/common/config_values.h:59,
                 from /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/common/config.h:28,
                 from /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/common/config_proxy.h:6,
                 from /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/common/ceph_context.h:41,
                 from /home/feedc0de/aur-ceph/src/ceph-17.2.5/src/librados/snap_set_diff.cc:7:
/home/feedc0de/aur-ceph/src/ceph-17.2.5/src/common/options/legacy_config_opts.h:1:10: fatal error: global_legacy_options.h: No such file or directory
    1 | #include "global_legacy_options.h"
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[2]: *** [src/CMakeFiles/rados_snap_set_diff_obj.dir/build.make:79: src/CMakeFiles/rados_snap_set_diff_obj.dir/librados/snap_set_diff.cc.o] Error 1
make[2]: Leaving directory '/home/feedc0de/aur-ceph/src/ceph-17.2.5/build'
make[1]: *** [CMakeFiles/Makefile2:3657: src/CMakeFiles/rados_snap_set_diff_obj.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....

You probably hit another instance of the racy legacy header generation that upstream has had since quincy. See https://github.com/bazaah/aur-ceph/blob/feature/v17.2.4/ceph-17.2.4-compressor-common-depends.patch for another example. Fortunately, the fix is pretty simple.

I'll probably add a patch for that in a pkgrel=2 version.
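
In case it's useful before that lands, the fix has the same shape as the linked compressor patch: give the object library that failed an explicit dependency on the generated legacy option headers. A sketch only; the target names below are inferred from the build log and upstream Ceph's CMake, so verify them before applying:

# in the CMakeLists.txt that defines the failing object library
add_dependencies(rados_snap_set_diff_obj legacy-option-headers)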

@bazaah
Owner

bazaah commented Oct 29, 2022

Rebuilt + tested, still no issues. Unless you can provide me with a reproducible test failure, I'm going to chalk this up to environment.

@0xFEEDC0DE64
Author

Well, I nuked the failing OSD and recreated it on the same disk; the rest seems to be working fine. Thank you so much for providing your work here.
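
In case it helps anyone landing here, "nuke and recreate on the same disk" usually boils down to something like the following sketch (the OSD id comes from the log above; the device path and exact procedure are assumptions):

# remove the broken OSD from the cluster, wipe the disk, create a fresh OSD on it
ceph osd purge 12 --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdX --destroy
ceph-volume lvm create --data /dev/sdX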

My cluster consists only of aarch64 machines (ODROIDs, Raspberry Pis, and others), and the produced binaries work across all of them.

Just out of curiosity, how much hassle could we have saved by using the containerized approach for ceph?

@bazaah
Owner

bazaah commented Oct 30, 2022

Just out of curiosity, how much hassle could we have saved by using the containerized approach for ceph?

cephadm (the binary) is a big ol' ball of python, so not much. I've experimented with it, and straight up refuse to have

  1. A giant tangle of python+cython
  2. containerized networking

between me and the storage layer for my whole org. Just cranky sysadmin behavior, I guess.

Besides, for my specific use case, having an actual librbd.so is important because of how we use qemu.
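
For context, this is the sort of consumption that wants a real librbd.so on the host rather than containerized daemons in between: qemu's rbd block driver links against it directly. Purely illustrative; the pool, image, and user names and the device wiring are made up:

# qemu opening an RBD image through librbd (the rbd: URI is the relevant part)
qemu-system-aarch64 \
  -drive id=osdisk,if=none,format=raw,file=rbd:vmpool/disk0:id=libvirt:conf=/etc/ceph/ceph.conf \
  -device virtio-blk-pci,drive=osdisk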

@bazaah
Owner

bazaah commented Oct 30, 2022

I'm closing this as I finally got a clean make check run:

Make check stdout

ctest
Test project /build/ceph/src/ceph-17.2.5/build
Start 2: setup-venv-for-mgr
Start 5: setup-venv-for-mgr-dashboard-py3
Start 8: setup-venv-for-mgr-dashboard-lint
Start 11: setup-venv-for-mgr-dashboard-check
Start 14: setup-venv-for-mgr-dashboard-openapi
Start 17: setup-venv-for-python-common
Start 20: setup-venv-for-cephfs-shell
1/228 Test #20: setup-venv-for-cephfs-shell ............... Passed 51.96 sec
Start 23: setup-venv-for-cephfs-top
2/228 Test #23: setup-venv-for-cephfs-top ................. Passed 37.11 sec
Start 226: setup-venv-for-qa
3/228 Test #17: setup-venv-for-python-common .............. Passed 101.68 sec
Start 19: run-tox-python-common
4/228 Test #226: setup-venv-for-qa ......................... Passed 32.20 sec
Start 22: run-tox-cephfs-shell
5/228 Test #22: run-tox-cephfs-shell ...................... Passed 16.59 sec
Start 25: run-tox-cephfs-top
6/228 Test #25: run-tox-cephfs-top ........................ Passed 14.45 sec
Start 228: run-tox-qa
7/228 Test #19: run-tox-python-common ..................... Passed 103.35 sec
Start 1: validate-options
8/228 Test #1: validate-options .......................... Passed 2.24 sec
Start 18: teardown-venv-for-python-common
9/228 Test #18: teardown-venv-for-python-common ........... Passed 0.35 sec
Start 21: teardown-venv-for-cephfs-shell
10/228 Test #21: teardown-venv-for-cephfs-shell ............ Passed 0.71 sec
Start 24: teardown-venv-for-cephfs-top
11/228 Test #24: teardown-venv-for-cephfs-top .............. Passed 0.04 sec
Start 26: run-rbd-unit-tests-N.sh
12/228 Test #5: setup-venv-for-mgr-dashboard-py3 .......... Passed 223.11 sec
Start 7: run-tox-mgr-dashboard-py3
13/228 Test #11: setup-venv-for-mgr-dashboard-check ........ Passed 223.24 sec
Start 13: run-tox-mgr-dashboard-check
14/228 Test #14: setup-venv-for-mgr-dashboard-openapi ...... Passed 223.98 sec
Start 16: run-tox-mgr-dashboard-openapi
15/228 Test #8: setup-venv-for-mgr-dashboard-lint ......... Passed 224.59 sec
Start 10: run-tox-mgr-dashboard-lint
16/228 Test #2: setup-venv-for-mgr ........................ Passed 253.05 sec
Start 4: run-tox-mgr
17/228 Test #26: run-rbd-unit-tests-N.sh ................... Passed 46.12 sec
Start 27: run-rbd-unit-tests-0.sh
18/228 Test #27: run-rbd-unit-tests-0.sh ................... Passed 43.40 sec
Start 28: run-rbd-unit-tests-1.sh
19/228 Test #16: run-tox-mgr-dashboard-openapi ............. Passed 97.58 sec
Start 15: teardown-venv-for-mgr-dashboard-openapi
20/228 Test #15: teardown-venv-for-mgr-dashboard-openapi ... Passed 0.11 sec
Start 29: run-rbd-unit-tests-61.sh
21/228 Test #13: run-tox-mgr-dashboard-check ............... Passed 110.64 sec
Start 12: teardown-venv-for-mgr-dashboard-check
22/228 Test #12: teardown-venv-for-mgr-dashboard-check ..... Passed 0.19 sec
Start 30: run-rbd-unit-tests-109.sh
23/228 Test #28: run-rbd-unit-tests-1.sh ................... Passed 44.85 sec
Start 31: run-rbd-unit-tests-127.sh
24/228 Test #29: run-rbd-unit-tests-61.sh .................. Passed 47.37 sec
Start 32: run-cli-tests
25/228 Test #30: run-rbd-unit-tests-109.sh ................. Passed 45.19 sec
Start 33: unittest_admin_socket
26/228 Test #33: unittest_admin_socket ..................... Passed 10.10 sec
Start 34: unittest_encoding
27/228 Test #34: unittest_encoding ......................... Passed 0.08 sec
Start 35: unittest_addrs
28/228 Test #31: run-rbd-unit-tests-127.sh ................. Passed 49.18 sec
Start 36: unittest_auth
29/228 Test #36: unittest_auth ............................. Passed 0.03 sec
Start 37: unittest_workqueue
30/228 Test #35: unittest_addrs ............................ Passed 2.76 sec
Start 38: unittest_striper
31/228 Test #38: unittest_striper .......................... Passed 0.01 sec
Start 39: unittest_str_list
32/228 Test #39: unittest_str_list ......................... Passed 0.01 sec
Start 40: unittest_log
33/228 Test #40: unittest_log .............................. Passed 1.10 sec
Start 41: unittest_base64
34/228 Test #41: unittest_base64 ........................... Passed 0.02 sec
Start 42: unittest_ceph_argparse
35/228 Test #42: unittest_ceph_argparse .................... Passed 0.01 sec
Start 43: unittest_ceph_compatset
36/228 Test #43: unittest_ceph_compatset ................... Passed 0.01 sec
Start 44: unittest_gather
37/228 Test #44: unittest_gather ........................... Passed 0.01 sec
Start 45: unittest_run_cmd
38/228 Test #45: unittest_run_cmd .......................... Passed 0.01 sec
Start 46: unittest_signals
39/228 Test #46: unittest_signals .......................... Passed 2.01 sec
Start 47: unittest_simple_spin
40/228 Test #47: unittest_simple_spin ...................... Passed 0.54 sec
Start 48: unittest_bufferlist
41/228 Test #37: unittest_workqueue ........................ Passed 7.16 sec
Start 49: compiletest_cxx11_client
42/228 Test #49: compiletest_cxx11_client .................. Passed 0.00 sec
Start 50: unittest_xlist
43/228 Test #50: unittest_xlist ............................ Passed 0.02 sec
Start 51: unittest_arch
44/228 Test #51: unittest_arch ............................. Passed 0.02 sec
Start 52: unittest_denc
45/228 Test #52: unittest_denc ............................. Passed 0.01 sec
Start 53: unittest_mempool
46/228 Test #53: unittest_mempool .......................... Passed 0.07 sec
Start 54: unittest_features
47/228 Test #54: unittest_features ......................... Passed 0.01 sec
Start 55: unittest_crypto
48/228 Test #55: unittest_crypto ........................... Passed 0.53 sec
Start 56: unittest_crypto_init
49/228 Test #56: unittest_crypto_init ...................... Passed 0.00 sec
Start 57: unittest_perf_counters
50/228 Test #57: unittest_perf_counters .................... Passed 0.01 sec
Start 58: unittest_ceph_crypto
51/228 Test #58: unittest_ceph_crypto ...................... Passed 0.01 sec
Start 59: unittest_utf8
52/228 Test #59: unittest_utf8 ............................. Passed 0.01 sec
Start 60: unittest_mime
53/228 Test #60: unittest_mime ............................. Passed 0.19 sec
Start 61: unittest_escape
54/228 Test #61: unittest_escape ........................... Passed 0.05 sec
Start 62: unittest_strtol
55/228 Test #62: unittest_strtol ........................... Passed 0.16 sec
Start 63: unittest_confutils
56/228 Test #63: unittest_confutils ........................ Passed 0.21 sec
Start 64: unittest_heartbeatmap
57/228 Test #64: unittest_heartbeatmap ..................... Passed 2.01 sec
Start 65: unittest_formatter
58/228 Test #65: unittest_formatter ........................ Passed 0.01 sec
Start 66: unittest_daemon_config
59/228 Test #66: unittest_daemon_config .................... Passed 0.02 sec
Start 67: unittest_libcephfs_config
60/228 Test #67: unittest_libcephfs_config ................. Passed 0.21 sec
Start 68: unittest_rbd_replay
61/228 Test #68: unittest_rbd_replay ....................... Passed 0.01 sec
Start 69: unittest_ipaddr
62/228 Test #69: unittest_ipaddr ........................... Passed 0.01 sec
Start 70: unittest_utime
63/228 Test #70: unittest_utime ............................ Passed 0.12 sec
Start 71: unittest_texttable
64/228 Test #71: unittest_texttable ........................ Passed 0.11 sec
Start 72: unittest_on_exit
65/228 Test #72: unittest_on_exit .......................... Passed 0.01 sec
Start 73: unittest_subprocess
66/228 Test #10: run-tox-mgr-dashboard-lint ................ Passed 186.04 sec
Start 9: teardown-venv-for-mgr-dashboard-lint
67/228 Test #9: teardown-venv-for-mgr-dashboard-lint ...... Passed 0.52 sec
Start 74: unittest_pageset
68/228 Test #74: unittest_pageset .......................... Passed 0.15 sec
Start 75: unittest_random_string
69/228 Test #75: unittest_random_string .................... Passed 0.01 sec
Start 76: unittest_any
70/228 Test #76: unittest_any .............................. Passed 0.38 sec
Start 77: unittest_weighted_shuffle
71/228 Test #77: unittest_weighted_shuffle ................. Passed 0.11 sec
Start 78: unittest_intarith
72/228 Test #78: unittest_intarith ......................... Passed 0.08 sec
Start 79: unittest_blkdev
73/228 Test #73: unittest_subprocess ....................... Passed 9.22 sec
Start 80: unittest_counter
74/228 Test #79: unittest_blkdev ........................... Passed 0.78 sec
Start 81: unittest_numa
75/228 Test #81: unittest_numa ............................. Passed 0.03 sec
Start 82: unittest_bloom_filter
76/228 Test #82: unittest_bloom_filter ..................... Passed 1.64 sec
Start 83: unittest_lruset
77/228 Test #80: unittest_counter .......................... Passed 2.07 sec
Start 84: unittest_histogram
78/228 Test #83: unittest_lruset ........................... Passed 1.12 sec
Start 85: unittest_prioritized_queue
79/228 Test #84: unittest_histogram ........................ Passed 0.18 sec
Start 86: unittest_mclock_priority_queue
80/228 Test #86: unittest_mclock_priority_queue ............ Passed 0.13 sec
Start 87: unittest_str_map
81/228 Test #87: unittest_str_map .......................... Passed 0.02 sec
Start 88: unittest_json_formattable
82/228 Test #88: unittest_json_formattable ................. Passed 0.01 sec
Start 89: unittest_json_formatter
83/228 Test #89: unittest_json_formatter ................... Passed 0.01 sec
Start 90: unittest_sharedptr_registry
84/228 Test #90: unittest_sharedptr_registry ............... Passed 0.01 sec
Start 91: unittest_shared_cache
85/228 Test #91: unittest_shared_cache ..................... Passed 0.01 sec
Start 92: unittest_sloppy_crc_map
86/228 Test #92: unittest_sloppy_crc_map ................... Passed 0.01 sec
Start 93: unittest_time
87/228 Test #85: unittest_prioritized_queue ................ Passed 0.57 sec
Start 94: unittest_util
88/228 Test #93: unittest_time ............................. Passed 0.28 sec
Start 95: unittest_random
89/228 Test #94: unittest_util ............................. Passed 0.01 sec
Start 96: unittest_throttle
90/228 Test #95: unittest_random ........................... Passed 0.14 sec
Start 97: unittest_lru
91/228 Test #97: unittest_lru .............................. Passed 0.19 sec
Start 98: unittest_intrusive_lru
92/228 Test #98: unittest_intrusive_lru .................... Passed 0.21 sec
Start 99: unittest_crc32c
93/228 Test #32: run-cli-tests ............................. Passed 50.09 sec
Start 100: unittest_config
94/228 Test #100: unittest_config ........................... Passed 0.26 sec
Start 101: unittest_context
95/228 Test #101: unittest_context .......................... Passed 0.17 sec
Start 102: unittest_safe_io
96/228 Test #102: unittest_safe_io .......................... Passed 0.67 sec
Start 103: unittest_url_escape
97/228 Test #103: unittest_url_escape ....................... Passed 0.12 sec
Start 104: unittest_pretty_binary
98/228 Test #104: unittest_pretty_binary .................... Passed 0.40 sec
Start 105: unittest_readahead
99/228 Test #105: unittest_readahead ........................ Passed 0.18 sec
Start 106: unittest_tableformatter
100/228 Test #106: unittest_tableformatter ................... Passed 0.14 sec
Start 107: unittest_xmlformatter
101/228 Test #107: unittest_xmlformatter ..................... Passed 0.20 sec
Start 108: unittest_bit_vector
102/228 Test #108: unittest_bit_vector ....................... Passed 0.94 sec
Start 109: unittest_interval_map
103/228 Test #109: unittest_interval_map ..................... Passed 0.05 sec
Start 110: unittest_interval_set
104/228 Test #110: unittest_interval_set ..................... Passed 0.03 sec
Start 111: unittest_weighted_priority_queue
105/228 Test #111: unittest_weighted_priority_queue .......... Passed 0.06 sec
Start 112: unittest_shunique_lock
106/228 Test #112: unittest_shunique_lock .................... Passed 0.03 sec
Start 113: unittest_fair_mutex
107/228 Test #99: unittest_crc32c ........................... Passed 6.13 sec
Start 114: unittest_perf_histogram
108/228 Test #113: unittest_fair_mutex ....................... Passed 0.17 sec
Start 115: unittest_global_doublefree
109/228 Test #115: unittest_global_doublefree ................ Passed 0.00 sec
Start 116: unittest_dns_resolve
110/228 Test #114: unittest_perf_histogram ................... Passed 0.19 sec
Start 117: unittest_back_trace
111/228 Test #116: unittest_dns_resolve ...................... Passed 0.04 sec
Start 118: unittest_hostname
112/228 Test #118: unittest_hostname ......................... Passed 0.23 sec
Start 119: unittest_iso_8601
113/228 Test #117: unittest_back_trace ....................... Passed 0.67 sec
Start 120: unittest_convenience
114/228 Test #119: unittest_iso_8601 ......................... Passed 0.23 sec
Start 121: unittest_bounded_key_counter
115/228 Test #120: unittest_convenience ...................... Passed 0.50 sec
Start 122: unittest_split
116/228 Test #121: unittest_bounded_key_counter .............. Passed 0.02 sec
Start 123: unittest_static_ptr
117/228 Test #123: unittest_static_ptr ....................... Passed 0.02 sec
Start 124: unittest_hobject
118/228 Test #124: unittest_hobject .......................... Passed 0.01 sec
Start 125: unittest_async_completion
119/228 Test #122: unittest_split ............................ Passed 0.10 sec
Start 126: unittest_async_shared_mutex
120/228 Test #125: unittest_async_completion ................. Passed 0.19 sec
Start 127: unittest_cdc
121/228 Test #126: unittest_async_shared_mutex ............... Passed 0.62 sec
Start 128: unittest_ceph_timer
122/228 Test #127: unittest_cdc .............................. Passed 1.30 sec
Start 129: unittest_option
123/228 Test #129: unittest_option ........................... Passed 0.08 sec
Start 130: unittest_blocked_completion
124/228 Test #130: unittest_blocked_completion ............... Passed 0.20 sec
Start 131: unittest_allocate_unique
125/228 Test #131: unittest_allocate_unique .................. Passed 0.16 sec
Start 132: unittest_journald_logger
126/228 Test #132: unittest_journald_logger .................. Passed 0.11 sec
Start 133: unittest_compression
127/228 Test #96: unittest_throttle ......................... Passed 15.19 sec
Start 134: unittest_crush_wrapper
128/228 Test #134: unittest_crush_wrapper .................... Passed 0.04 sec
Start 135: unittest_crush
129/228 Test #135: unittest_crush ............................ Passed 4.12 sec
Start 136: crush_weights.sh
130/228 Test #136: crush_weights.sh .......................... Passed 1.48 sec
Start 137: check-generated.sh
131/228 Test #128: unittest_ceph_timer ....................... Passed 12.18 sec
Start 138: unittest_librados
132/228 Test #138: unittest_librados ......................... Passed 0.32 sec
Start 139: unittest_librados_config
133/228 Test #139: unittest_librados_config .................. Passed 0.01 sec
Start 140: unittest_mds_authcap
134/228 Test #140: unittest_mds_authcap ...................... Passed 0.01 sec
Start 141: unittest_mds_sessionfilter
135/228 Test #141: unittest_mds_sessionfilter ................ Passed 0.04 sec
Start 142: test_ceph_daemon.py
136/228 Test #133: unittest_compression ...................... Passed 11.76 sec
Start 143: test_ceph_argparse.py
137/228 Test #142: test_ceph_daemon.py ....................... Passed 1.42 sec
Start 144: unittest_journal
138/228 Test #144: unittest_journal .......................... Passed 2.81 sec
Start 145: unittest_erasure_code_plugin
139/228 Test #145: unittest_erasure_code_plugin .............. Passed 16.51 sec
Start 146: unittest_erasure_code
140/228 Test #146: unittest_erasure_code ..................... Passed 0.01 sec
Start 147: unittest_erasure_code_plugin_jerasure
141/228 Test #147: unittest_erasure_code_plugin_jerasure ..... Passed 0.13 sec
Start 148: unittest_erasure_code_lrc
142/228 Test #148: unittest_erasure_code_lrc ................. Passed 0.07 sec
Start 149: unittest_erasure_code_plugin_lrc
143/228 Test #149: unittest_erasure_code_plugin_lrc .......... Passed 0.02 sec
Start 150: unittest_erasure_code_plugin_shec
144/228 Test #150: unittest_erasure_code_plugin_shec ......... Passed 0.03 sec
Start 151: unittest_erasure_code_example
145/228 Test #151: unittest_erasure_code_example ............. Passed 0.01 sec
Start 152: unittest_erasure_code_jerasure
146/228 Test #152: unittest_erasure_code_jerasure ............ Passed 0.02 sec
Start 153: unittest_erasure_code_shec
147/228 Test #143: test_ceph_argparse.py ..................... Passed 21.04 sec
Start 154: unittest_erasure_code_shec_all
148/228 Test #153: unittest_erasure_code_shec ................ Passed 5.03 sec
Start 155: unittest_erasure_code_shec_thread
149/228 Test #48: unittest_bufferlist ....................... Passed 75.12 sec
Start 156: unittest_erasure_code_shec_arguments
150/228 Test #156: unittest_erasure_code_shec_arguments ...... Passed 0.09 sec
Start 157: unittest_erasure_code_clay
151/228 Test #157: unittest_erasure_code_clay ................ Passed 0.02 sec
Start 158: unittest_erasure_code_plugin_clay
152/228 Test #158: unittest_erasure_code_plugin_clay ......... Passed 0.02 sec
Start 159: unittest_mds_types
153/228 Test #159: unittest_mds_types ........................ Passed 0.01 sec
Start 160: unittest_mon_moncap
154/228 Test #160: unittest_mon_moncap ....................... Passed 0.02 sec
Start 161: unittest_mon_monmap
155/228 Test #161: unittest_mon_monmap ....................... Passed 0.05 sec
Start 162: unittest_mon_pgmap
156/228 Test #162: unittest_mon_pgmap ........................ Passed 0.01 sec
Start 163: unittest_mon_montypes
157/228 Test #163: unittest_mon_montypes ..................... Passed 0.01 sec
Start 164: unittest_mon_election
158/228 Test #164: unittest_mon_election ..................... Passed 0.10 sec
Start 165: unittest_mgr_mgrcap
159/228 Test #165: unittest_mgr_mgrcap ....................... Passed 0.01 sec
Start 166: unittest_mgr_ttlcache
160/228 Test #166: unittest_mgr_ttlcache ..................... Passed 0.08 sec
Start 167: mgr-dashboard-frontend-unittests
161/228 Test #7: run-tox-mgr-dashboard-py3 ................. Passed 265.31 sec
Start 6: teardown-venv-for-mgr-dashboard-py3
162/228 Test #6: teardown-venv-for-mgr-dashboard-py3 ....... Passed 0.16 sec
Start 168: unittest_frames_v2
163/228 Test #168: unittest_frames_v2 ........................ Passed 0.02 sec
Start 169: unittest_comp_registry
164/228 Test #169: unittest_comp_registry .................... Passed 0.11 sec
Start 170: unittest_chain_xattr
165/228 Test #170: unittest_chain_xattr ...................... Passed 0.07 sec
Start 171: unittest_rocksdb_option
166/228 Test #171: unittest_rocksdb_option ................... Passed 0.12 sec
Start 172: unittest_alloc
167/228 Test #172: unittest_alloc ............................ Passed 0.10 sec
Start 173: unittest_fastbmap_allocator
168/228 Test #173: unittest_fastbmap_allocator ............... Passed 5.88 sec
Start 174: unittest_hybrid_allocator
169/228 Test #174: unittest_hybrid_allocator ................. Passed 0.02 sec
Start 175: unittest_bluefs
170/228 Test #155: unittest_erasure_code_shec_thread ......... Passed 63.09 sec
Start 176: unittest_bluefs_large_write_1
171/228 Test #137: check-generated.sh ........................ Passed 93.83 sec
Start 177: unittest_bluefs_large_write_2
172/228 Test #154: unittest_erasure_code_shec_all ............ Passed 144.82 sec
Start 178: unittest_bluefs_flush_1
173/228 Test #177: unittest_bluefs_large_write_2 ............. Passed 207.43 sec
Start 179: unittest_bluefs_flush_2
174/228 Test #228: run-tox-qa ................................ Passed 765.92 sec
Start 180: unittest_bluefs_flush_3
175/228 Test #176: unittest_bluefs_large_write_1 ............. Passed 548.92 sec
Start 181: unittest_bluefs_compact_sync
176/228 Test #4: run-tox-mgr ............................... Passed 1393.66 sec
Start 3: teardown-venv-for-mgr
177/228 Test #3: teardown-venv-for-mgr ..................... Passed 0.74 sec
Start 182: unittest_bluefs_compact_async
178/228 Test #167: mgr-dashboard-frontend-unittests .......... Passed 1270.10 sec
Start 183: unittest_bluefs_replay
179/228 Test #175: unittest_bluefs ........................... Passed 1465.62 sec
Start 184: unittest_bluefs_replay_growth
180/228 Test #178: unittest_bluefs_flush_1 ................... Passed 1510.12 sec
Start 185: unittest_bluefs_regression
181/228 Test #185: unittest_bluefs_regression ................ Passed 147.77 sec
Start 186: unittest_bluestore_types
182/228 Test #186: unittest_bluestore_types .................. Passed 14.02 sec
Start 187: unittest_bdev
183/228 Test #187: unittest_bdev ............................. Passed 64.47 sec
Start 188: unittest_deferred
184/228 Test #179: unittest_bluefs_flush_2 ................... Passed 1631.79 sec
Start 189: unittest_transaction
185/228 Test #188: unittest_deferred ......................... Passed 61.25 sec
Start 190: unittest_memstore_clone
186/228 Test #189: unittest_transaction ...................... Passed 0.79 sec
Start 191: ceph_test_object_map
187/228 Test #191: ceph_test_object_map ...................... Passed 8.37 sec
Start 192: unittest_lfnindex
188/228 Test #192: unittest_lfnindex ......................... Passed 0.38 sec
Start 193: unittest_osdmap
189/228 Test #190: unittest_memstore_clone ................... Passed 13.25 sec
Start 194: unittest_osd_types
190/228 Test #194: unittest_osd_types ........................ Passed 0.17 sec
Start 195: unittest_ecbackend
191/228 Test #195: unittest_ecbackend ........................ Passed 0.50 sec
Start 196: unittest_osdscrub
192/228 Test #196: unittest_osdscrub ......................... Passed 0.91 sec
Start 197: unittest_pglog
193/228 Test #193: unittest_osdmap ........................... Passed 6.78 sec
Start 198: unittest_hitset
194/228 Test #198: unittest_hitset ........................... Passed 0.20 sec
Start 199: unittest_osd_osdcap
195/228 Test #199: unittest_osd_osdcap ....................... Passed 0.15 sec
Start 200: unittest_extent_cache
196/228 Test #200: unittest_extent_cache ..................... Passed 0.02 sec
Start 201: unittest_pg_transaction
197/228 Test #201: unittest_pg_transaction ................... Passed 0.02 sec
Start 202: unittest_ec_transaction
198/228 Test #202: unittest_ec_transaction ................... Passed 0.12 sec
Start 203: unittest_mclock_scheduler
199/228 Test #203: unittest_mclock_scheduler ................. Passed 0.73 sec
Start 204: unittest_ceph_immutable_obj_cache
200/228 Test #204: unittest_ceph_immutable_obj_cache ......... Passed 0.27 sec
Start 205: unittest_rgw_bencode
201/228 Test #205: unittest_rgw_bencode ...................... Passed 0.18 sec
Start 206: unittest_rgw_bucket_sync_cache
202/228 Test #206: unittest_rgw_bucket_sync_cache ............ Passed 0.08 sec
Start 207: unittest_rgw_period_history
203/228 Test #207: unittest_rgw_period_history ............... Passed 0.93 sec
Start 208: unittest_rgw_compression
204/228 Test #208: unittest_rgw_compression .................. Passed 8.09 sec
Start 209: unittest_http_manager
205/228 Test #209: unittest_http_manager ..................... Passed 0.41 sec
Start 210: unittest_rgw_reshard_wait
206/228 Test #210: unittest_rgw_reshard_wait ................. Passed 0.59 sec
Start 211: unittest_rgw_ratelimit
207/228 Test #211: unittest_rgw_ratelimit .................... Passed 1.02 sec
Start 212: unittest_rgw_crypto
208/228 Test #181: unittest_bluefs_compact_sync .............. Passed 1355.91 sec
Start 213: unittest_rgw_reshard
209/228 Test #213: unittest_rgw_reshard ...................... Passed 0.73 sec
Start 214: unittest_rgw_putobj
210/228 Test #214: unittest_rgw_putobj ....................... Passed 0.03 sec
Start 215: unittest_rgw_iam_policy
211/228 Test #212: unittest_rgw_crypto ....................... Passed 1.39 sec
Start 216: unittest_rgw_string
212/228 Test #216: unittest_rgw_string ....................... Passed 0.02 sec
Start 217: unittest_rgw_dmclock_scheduler
213/228 Test #217: unittest_rgw_dmclock_scheduler ............ Passed 0.14 sec
Start 218: unittest_rgw_amqp
214/228 Test #215: unittest_rgw_iam_policy ................... Passed 0.89 sec
Start 219: unittest_rgw_xml
215/228 Test #219: unittest_rgw_xml .......................... Passed 0.15 sec
Start 220: unittest_rgw_arn
216/228 Test #220: unittest_rgw_arn .......................... Passed 0.68 sec
Start 221: unittest_rgw_kms
217/228 Test #221: unittest_rgw_kms .......................... Passed 0.71 sec
Start 222: unittest_rgw_url
218/228 Test #222: unittest_rgw_url .......................... Passed 0.34 sec
Start 223: test-ceph-diff-sorted.sh
219/228 Test #223: test-ceph-diff-sorted.sh .................. Passed 0.24 sec
Start 224: unittest_rgw_lua
220/228 Test #218: unittest_rgw_amqp ......................... Passed 5.35 sec
Start 225: unittest_rbd_mirror
221/228 Test #197: unittest_pglog ............................ Passed 25.50 sec
Start 227: teardown-venv-for-qa
222/228 Test #227: teardown-venv-for-qa ...................... Passed 0.11 sec
223/228 Test #224: unittest_rgw_lua .......................... Passed 8.67 sec
224/228 Test #183: unittest_bluefs_replay .................... Passed 762.70 sec
225/228 Test #182: unittest_bluefs_compact_async ............. Passed 864.57 sec
226/228 Test #225: unittest_rbd_mirror ....................... Passed 152.89 sec
227/228 Test #184: unittest_bluefs_replay_growth ............. Passed 775.94 sec
228/228 Test #180: unittest_bluefs_flush_3 ................... Passed 2118.82 sec

100% tests passed, 0 tests failed out of 228

Total Test time (real) = 3038.84 sec
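
As a side note, if a single suite ever needs re-running, ctest can select it by name from the build directory instead of repeating the whole ~50-minute run (the test name below is just one taken from the list above):

cd build && ctest -R unittest_bluefs_flush_3 --output-on-failure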
