os/bluestore: add new garbage collector #12144

Merged
merged 11 commits into ceph:master from ifed01:wip-bluestore-no-blobdepth on Feb 16, 2017

Conversation

3 participants
@ifed01
Contributor

ifed01 commented Nov 22, 2016

This is the final version; it estimates how many allocation units one can save by uncompressing overlapped blob(s) and storing their data in raw format.
Rebased on top of PR #12904
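For illustration, a minimal sketch of that kind of estimate; the types and names below are hypothetical stand-ins, not the actual BlueStore code:

// Hypothetical sketch of the GC benefit estimate described above.
#include <cstdint>
#include <vector>

struct BlobInfo {
  uint64_t compressed_len;  // space the compressed blob occupies on disk
  uint64_t referenced_len;  // bytes of it still referenced after the write
};

// Allocation units (AUs) saved if the overlapped compressed blobs are
// collected: each compressed blob is released in full, but its still
// referenced data has to be rewritten in raw (uncompressed) format.
int64_t estimate_aus_saved(const std::vector<BlobInfo>& overlapped,
                           uint64_t au_size) {
  auto aus = [au_size](uint64_t len) {
    return (len + au_size - 1) / au_size;  // round up to whole AUs
  };
  int64_t released = 0, consumed = 0;
  for (const auto& b : overlapped) {
    released += aus(b.compressed_len);   // freed when the blob is collected
    consumed += aus(b.referenced_len);   // rewritten as raw data
  }
  return released - consumed;            // positive => GC pays off
}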

@ifed01 ifed01 changed the title from [RFC]os/bluestore: add new garbage collector to os/bluestore: add new garbage collector Dec 5, 2016

@liewegas

Some nits here, but I think we can do a bit better if we move this logic into _wctx_finish, after we put_ref. That way we can look at the final ref_map instead of making a full copy inside the GC class.

(Also, I think this will change somewhat once we have a different/more compact representation of the ref_map...)

src/os/bluestore/BlueStore.h
* per all blobs to enable compressed blobs garbage collection
*
*/
OPTION(bluestore_gc_enable_total_threshold, OPT_INT, 0)


@liewegas

liewegas Dec 22, 2016

Member

Do we really want defaults at 0? Doesn't that mean aggressively collect even if there is only a tiny benefit?


@ifed01

ifed01 Jan 23, 2017

Contributor

My considerations on that:

  1. There is no need to store data compressed if this provides no storage saving.
  2. This way we'll have a data layout more consistent with the original write handling: no saving due to compression, no compressed blobs written.

Any other suggestions?
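As an illustration of those semantics, here is a minimal, hypothetical sketch of how a total threshold like bluestore_gc_enable_total_threshold could gate the collector; should_collect() is made up, and estimate_aus_saved() refers to the sketch earlier in this thread:

#include <cstdint>

// Hypothetical gate, not BlueStore's actual logic. With the default
// threshold of 0, GC runs whenever uncompressing the overlapped blobs
// saves even a single allocation unit, i.e. data stays compressed only
// while compression actually saves space.
bool should_collect(int64_t expected_aus_saved, int64_t total_threshold) {
  return expected_aus_saved > total_threshold;
}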
src/os/bluestore/BlueStore.h
@liewegas
Member

liewegas commented Dec 22, 2016

BTW, a few style nits for this PR and others:

// double-slash comments have a space
//not like this

// comparison operators have spaces, like so
if (a >= b) ;
// not
if (a >=b) ;
// same with braces...
void func(int foo) const {
// not
void func(int foo) const{
// etc.

// 80 columns please
@ifed01
Contributor

ifed01 commented Jan 24, 2017

@liewegas - resolved and rebased. Please take a look.


@ifed01
Contributor

ifed01 commented Jan 24, 2017

W.r.t. doing GC at _wctx_finish - I refactored the code to avoid the ref_map copying but left GC at its original location, to avoid duplicating the do_write_data/do_alloc_write/_wctx_finish call sequence for garbage-collected data. Still any objections against that?
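A rough, runnable stub of the ordering in question; the three function names come from the comment above, while the signatures and WriteContext are made up for illustration:

// Illustrative stub only; real BlueStore signatures differ.
struct WriteContext {};  // stand-in for BlueStore's wctx

void do_write_data(WriteContext&) {}   // stage data, find overlapped blobs
void do_alloc_write(WriteContext&) {}  // allocate space, queue the writes
void wctx_finish(WriteContext&) {}     // put_ref old extents, free space

void do_write() {
  WriteContext wctx;
  do_write_data(wctx);
  // GC is decided here, before alloc/finish, so garbage-collected data can
  // be folded into the single call sequence below instead of repeating
  // do_write_data/do_alloc_write/wctx_finish a second time.
  do_alloc_write(wctx);
  wctx_finish(wctx);
}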


@ifed01
Contributor

ifed01 commented Jan 25, 2017

@liewegas - resolved your comments.


@liewegas
Member

liewegas commented Jan 25, 2017

can you rebase on master please?


@ifed01
Contributor

ifed01 commented Jan 25, 2017

Already rebased


@liewegas
Member

liewegas commented Jan 25, 2017

Igor Fedotov added some commits Nov 22, 2016

Igor Fedotov
os/bluestore: add new garbage collector
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Igor Fedotov
os/bluestore: add test case for GC
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Igor Fedotov
os/bluestore: add operator== to AllocExtent structure
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Igor Fedotov
os/bluestore: add a config parameter to control garbage collection
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Igor Fedotov
os/bluestore: add performance counter for garbage collector
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Igor Fedotov
os/objectstore: remove unused logger member
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Igor Fedotov
os/objectstore: add access to objectstore's performance counters from UT
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Igor Fedotov
test/store_test: add garbage collector test case
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Igor Fedotov
os/bluestore: replace Blob's ref_map with reference counting
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Igor Fedotov
os/bluestore: raise ExtentMap/Blob encoding version to handle migration from ref_map to ref counting properly

Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Igor Fedotov
test/objectstore: fix incomplete restore to original settings in store_test.OnodeSizeTracking

Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>

@liewegas liewegas added the needs-qa label Feb 6, 2017

@yuriw (comment minimized)
@ifed01
Contributor

ifed01 commented Feb 14, 2017

I've made a brief analysis of @yuriw's report. I don't see any bluestore/GC-related issues there.

799447 - unrelated to bluestore: filestore failure

[ RUN ] ObjectStore/StoreTest.Synthetic/1
2017-02-09T02:00:34.485 INFO:teuthology.orchestra.run.smithi194.stderr:seeding object 0
2017-02-09T02:00:35.889 INFO:teuthology.orchestra.run.smithi194.stderr:seeding object 500
2017-02-09T02:00:36.425 INFO:teuthology.orchestra.run.smithi194.stderr:Op 0
2017-02-09T02:00:36.425 INFO:teuthology.orchestra.run.smithi194.stderr:available_objects: 985 in_flight_objects: 15 total objects: 1000 in_flight 15
2017-02-09T02:00:44.702 INFO:teuthology.orchestra.run.smithi194.stderr:2017-02-09 02:00:44.704101 7f2ca6b77a40 1 journal close store_test_temp_journal
2017-02-09T02:00:44.727 INFO:teuthology.orchestra.run.smithi194.stderr:ceph_test_objectstore: /build/ceph-12.0.0-146-g1c274c8/src/test/objectstore/store_test.cc:3867: void SyntheticWorkloadState::fsck(bool): Assertion `r == 0' failed.
2017-02-09T02:00:44.728 INFO:teuthology.orchestra.run.smithi194.stderr:*** Caught signal (Aborted) **
2017-02-09T02:00:44.728 INFO:teuthology.orchestra.run.smithi194.stderr: in thread 7f2ca6b77a40 thread_name:ceph_test_objec
2017-02-09T02:00:44.776 INFO:teuthology.orchestra.run.smithi194.stderr: ceph version 12.0.0-146-g1c274c8 (1c274c8)
2017-02-09T02:00:44.776 INFO:teuthology.orchestra.run.smithi194.stderr: 1: (()+0x49f562) [0x55e91f4b6562]
2017-02-09T02:00:44.777 INFO:teuthology.orchestra.run.smithi194.stderr: 2: (()+0x11390) [0x7f2ca6766390]
2017-02-09T02:00:44.777 INFO:teuthology.orchestra.run.smithi194.stderr: 3: (gsignal()+0x38) [0x7f2c9c21e428]
2017-02-09T02:00:44.777 INFO:teuthology.orchestra.run.smithi194.stderr: 4: (abort()+0x16a) [0x7f2c9c22002a]
2017-02-09T02:00:44.780 INFO:teuthology.orchestra.run.smithi194.stderr: 5: (()+0x2dbd7) [0x7f2c9c216bd7]
2017-02-09T02:00:44.780 INFO:teuthology.orchestra.run.smithi194.stderr: 6: (()+0x2dc82) [0x7f2c9c216c82]
2017-02-09T02:00:44.780 INFO:teuthology.orchestra.run.smithi194.stderr: 7: (doSyntheticTest(boost::scoped_ptr&, int, unsigned long, unsigned long, unsigned long)+0x1f14) [0x55e91f20a894]

799471 - unrelated to bluestore?

2017-02-09T03:37:59.502 INFO:teuthology.run:Summary data:
{description: 'rados/upgrade/jewel-x-singleton/{0-cluster/{openstack.yaml start.yaml}
1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml
5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml}
7-workload/{radosbench.yaml rbd_api.yaml} 8-finish-upgrade.yaml 9-workload/{rbd-python.yaml
rgw-swift.yaml snaps-many-objects.yaml}}', duration: 6242.836159944534, failure_reason: '''wait_until_healthy''
reached maximum tries (150) after waiting for 900 seconds', flavor: basic, owner: scheduled_yuriw@teuthology,
sentry_event: 'http://sentry.ceph.com/sepia/teuthology/?q=55623407d6064a718f77dc70c7f95d6f',
status: fail, success: false}

799507 - seems to be unrelated to bluestore/garbage collector

2017-02-09T02:59:00.225 INFO:teuthology.orchestra.run.smithi139.stderr:rm: cannot remove ‘/var/lib/ceph’: No such file or directory
2017-02-09T02:59:00.226 INFO:teuthology.orchestra.run.smithi139:Running: 'sudo rm -r /var/log/ceph'
2017-02-09T02:59:00.278 INFO:teuthology.orchestra.run.smithi139.stderr:rm: cannot remove ‘/var/log/ceph’: No such file or directory
2017-02-09T02:59:00.279 INFO:teuthology.orchestra.run.smithi139:Running: 'sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf'
2017-02-09T02:59:00.327 INFO:teuthology.orchestra.run.smithi139.stderr:mv: cannot stat ‘/etc/yum/pluginconf.d/priorities.conf.orig’: No such file or directory
2017-02-09T02:59:00.328 DEBUG:teuthology.parallel:result is None
2017-02-09T02:59:00.328 INFO:teuthology.nuke:Installed packages removed.
2017-02-09T02:59:00.369 INFO:teuthology.lock:unlocked smithi139.front.sepia.ceph.com
2017-02-09T02:59:00.383 INFO:teuthology.run:Summary data:
{description: 'rados/verify/{1thrash/none.yaml clusters/{fixed-2.yaml openstack.yaml}
fs/btrfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/simple.yaml
objectstore/bluestore.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}',
duration: 3257.2218351364136, failure_reason: 'Command failed (workunit test rados/test.sh)
on smithi003 with status 1: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
&& cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
CEPH_REF=1c274c80d956d0912f3295395008b43c1ce45620 TESTDIR="/home/ubuntu/cephtest"
CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0
adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h
/home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh''', flavor: notcmalloc,
owner: scheduled_yuriw@teuthology, sentry_event: 'http://sentry.ceph.com/sepia/teuthology/?q=99ad5a1b336546c4956facf801e3404c',
status: fail, success: false}

2017-02-09T02:59:00.383 DEBUG:teuthology.report:Pushing job info to http://paddles.front.sepia.ceph.com/
2017-02-09T02:59:00.449 INFO:teuthology.run:FAIL

799518 - filestore related?

2017-02-09T02:31:18.714 INFO:teuthology.run:Summary data:
{description: 'rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml
clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml
msgr/simple.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml
workloads/cache-agent-big.yaml}', duration: 1474.646087884903, failure_reason: '"2017-02-09
02:06:45.974040 mon.2 172.21.15.104:6790/0 3 : cluster [WRN] message from mon.0
was stamped 9.783507s in the future, clocks not synchronized" in cluster log',
flavor: basic, owner: scheduled_yuriw@teuthology, sentry_event: 'http://sentry.ceph.com/sepia/teuthology/?q=4253a56eed834247bfd909b142992d69',
status: fail, success: false}

2017-02-09T02:31:18.714 DEBUG:teuthology.report:Pushing job info to http://paddles.front.sepia.ceph.com/
2017-02-09T02:31:18.772 INFO:teuthology.run:FAIL

799574 - unrelated to bluestore/garbage collector

2017-02-09T02:25:55.661 INFO:tasks.workunit.client.0.smithi168.stdout:[ RUN ] EnvLibradosMutipoolTest.DBBasics
2017-02-09T02:26:05.669 INFO:tasks.workunit.client.0.smithi168.stderr:/build/ceph-12.0.0-146-g1c274c8/src/common/ceph_crypto.cc: In function 'void ceph::crypto::init(CephContext*)' thread 7f132bc11680 time 2017-02-09 02:26:05.669712
2017-02-09T02:26:05.669 INFO:tasks.workunit.client.0.smithi168.stderr:/build/ceph-12.0.0-146-g1c274c8/src/common/ceph_crypto.cc: 77: FAILED assert(crypto_context != __null)
2017-02-09T02:26:05.685 INFO:tasks.workunit.client.0.smithi168.stderr: ceph version 12.0.0-146-g1c274c8 (1c274c80d956d0912f3295395008b43c1ce45620)
2017-02-09T02:26:05.686 INFO:tasks.workunit.client.0.smithi168.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x10e) [0x7f1321876bbe]
2017-02-09T02:26:05.686 INFO:tasks.workunit.client.0.smithi168.stderr: 2: (ceph::crypto::shutdown()+0) [0x7f1321a4bbb0]
2017-02-09T02:26:05.686 INFO:tasks.workunit.client.0.smithi168.stderr: 3: (CephContext::init_crypto()+0x15) [0x7f13219fa105]
2017-02-09T02:26:05.686 INFO:tasks.workunit.client.0.smithi168.stderr: 4: (common_init_finish(CephContext*)+0x10) [0x7f13219f6dd0]
2017-02-09T02:26:05.686 INFO:tasks.workunit.client.0.smithi168.stderr: 5: (librados::RadosClient::connect()+0x1d) [0x7f132b76708d]
2017-02-09T02:26:05.686 INFO:tasks.workunit.client.0.smithi168.stderr: 6: ./env_librados_test() [0x4df159]
2017-02-09T02:26:05.686 INFO:tasks.workunit.client.0.smithi168.stderr: 7: ./env_librados_test() [0x4df638]
2017-02-09T02:26:05.687 INFO:tasks.workunit.client.0.smithi168.stderr: 8: ./env_librados_test() [0x783fb9]
2017-02-09T02:26:05.687 INFO:tasks.workunit.client.0.smithi168.stderr: 9: ./env_librados_test() [0x77701a]
2017-02-09T02:26:05.687 INFO:tasks.workunit.client.0.smithi168.stderr: 10: ./env_librados_test() [0x777117]
2017-02-09T02:26:05.687 INFO:tasks.workunit.client.0.smithi168.stderr: 11: ./env_librados_test() [0x77739d]
2017-02-09T02:26:05.687 INFO:tasks.workunit.client.0.smithi168.stderr: 12: ./env_librados_test() [0x777683]
2017-02-09T02:26:05.687 INFO:tasks.workunit.client.0.smithi168.stderr: 13: ./env_librados_test() [0x410ae1]
2017-02-09T02:26:05.687 INFO:tasks.workunit.client.0.smithi168.stderr: 14: (__libc_start_main()+0xf5) [0x7f132a0b6f45]
2017-02-09T02:26:05.688 INFO:tasks.workunit.client.0.smithi168.stderr: 15: ./env_librados_test() [0x4ca0da]

799751 - unrelated to bluestore?

2017-02-09T06:43:45.521 INFO:teuthology.run:Summary data:
{description: 'rados/upgrade/jewel-x-singleton/{0-cluster/{openstack.yaml start.yaml}
1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml
5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml}
7-workload/{radosbench.yaml rbd_api.yaml} 8-finish-upgrade.yaml 9-workload/{rbd-python.yaml
rgw-swift.yaml snaps-many-objects.yaml}}', duration: 6203.694232940674, failure_reason: '''wait_until_healthy''
reached maximum tries (150) after waiting for 900 seconds', flavor: basic, owner: scheduled_yuriw@teuthology,
sentry_event: 'http://sentry.ceph.com/sepia/teuthology/?q=7df6c08c08b7485f865e7a99d90798c2',
status: fail, success: false}

2017-02-09T06:43:45.522 DEBUG:teuthology.report:Pushing job info to http://paddles.front.sepia.ceph.com/
2017-02-09T06:43:45.573 INFO:teuthology.run:FAIL

799755 - unrelated to bluestore:

[ RUN ] EnvLibradosMutipoolTest.DBBulkLoadKeysInRandomOrder
2017-02-09T05:05:32.358 INFO:tasks.workunit.client.0.smithi204.stdout:Test size : loop(64); bulk_size(32768)
2017-02-09T05:05:53.932 INFO:tasks.workunit.client.0.smithi204.stdout:Time by default : 7489ms
2017-02-09T05:06:03.417 INFO:tasks.workunit.client.0.smithi204.stdout:Time by librados : 9427ms
2017-02-09T05:06:05.326 INFO:tasks.workunit.client.0.smithi204.stderr:/build/ceph-12.0.0-146-g1c274c8/src/common/ceph_crypto.cc: In function 'void ceph::crypto::init(CephContext*)' thread 7fc88a177700 time 2017-02-09 05:06:05.327386
2017-02-09T05:06:05.326 INFO:tasks.workunit.client.0.smithi204.stderr:/build/ceph-12.0.0-146-g1c274c8/src/common/ceph_crypto.cc: 77: FAILED assert(crypto_context != __null)
2017-02-09T05:06:05.342 INFO:tasks.workunit.client.0.smithi204.stderr: ceph version 12.0.0-146-g1c274c8 (1c274c8)
2017-02-09T05:06:05.342 INFO:tasks.workunit.client.0.smithi204.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x7fc87fcfbc42]
2017-02-09T05:06:05.342 INFO:tasks.workunit.client.0.smithi204.stderr: 2: (ceph::crypto::init(CephContext*)+0x104) [0x7fc87ff01734]
2017-02-09T05:06:05.342 INFO:tasks.workunit.client.0.smithi204.stderr: 3: (CephContext::init_crypto()+0x19) [0x7fc87feaa829]
2017-02-09T05:06:05.342 INFO:tasks.workunit.client.0.smithi204.stderr: 4: (common_init_finish(CephContext*)+0x10) [0x7fc87fea7700]
2017-02-09T05:06:05.342 INFO:tasks.workunit.client.0.smithi204.stderr: 5: (librados::RadosClient::connect()+0x30) [0x7fc889cc8860]

799758 - unrelated to bluestore: filestore failure

[ RUN ] ObjectStore/StoreTest.Synthetic/1
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.852354 7f3f19300a40 1 filestore(filestore.test_temp_dir) leveldb db exists/created
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.852459 7f3f19300a40 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.852467 7f3f19300a40 1 journal _open store_test_temp_journal fd 6: 419430400 bytes, block size 4096 bytes, directio = 1, aio = 0
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.868591 7f3f19300a40 -1 journal check: ondisk fsid 21e1bc96-5015-4334-9c2a-4336bf84b7ee doesn't match expected 30e938ea-85d8-456d-9aee-69fefaa993fd, invalid (someone else's?) journal
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.868605 7f3f19300a40 1 journal close store_test_temp_journal
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.868629 7f3f19300a40 1 journal _open store_test_temp_journal fd 6: 419430400 bytes, block size 4096 bytes, directio = 1, aio = 0
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.885642 7f3f19300a40 0 filestore(filestore.test_temp_dir) mkjournal created journal on store_test_temp_journal
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.927676 7f3f19300a40 1 filestore(filestore.test_temp_dir) mkfs done in filestore.test_temp_dir
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.927779 7f3f19300a40 0 filestore(filestore.test_temp_dir) backend generic (magic 0xef53)
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.927788 7f3f19300a40 -1 filestore(filestore.test_temp_dir) WARNING: max attr value size (1024) is smaller than osd_max_object_name_len (2048). Your backend filesystem appears to not support attrs large enough to handle the configured max rados name size. You may get unexpected ENAMETOOLONG errors on rados operations or buggy behavior
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.928079 7f3f19300a40 0 genericfilestorebackend(filestore.test_temp_dir) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.928085 7f3f19300a40 0 genericfilestorebackend(filestore.test_temp_dir) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.928087 7f3f19300a40 0 genericfilestorebackend(filestore.test_temp_dir) detect_features: splice() is disabled via 'filestore splice' config option
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.046987 7f3f19300a40 0 genericfilestorebackend(filestore.test_temp_dir) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.195049 7f3f19300a40 0 filestore(filestore.test_temp_dir) limited size xattrs
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.195304 7f3f19300a40 0 filestore(filestore.test_temp_dir) start omap initiation
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.279225 7f3f19300a40 0 filestore(filestore.test_temp_dir) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.279336 7f3f19300a40 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2017-02-09T05:12:40.520 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.279340 7f3f19300a40 1 journal _open store_test_temp_journal fd 56: 419430400 bytes, block size 4096 bytes, directio = 1, aio = 0
2017-02-09T05:12:40.520 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.279723 7f3f19300a40 1 journal _open store_test_temp_journal fd 56: 419430400 bytes, block size 4096 bytes, directio = 1, aio = 0
2017-02-09T05:12:40.520 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.280000 7f3f19300a40 1 filestore(filestore.test_temp_dir) upgrade
2017-02-09T05:12:40.520 INFO:teuthology.orchestra.run.smithi113.stderr:seeding object 0
2017-02-09T05:12:41.801 INFO:teuthology.orchestra.run.smithi113.stderr:seeding object 500
2017-02-09T05:12:42.347 INFO:teuthology.orchestra.run.smithi113.stderr:Op 0
2017-02-09T05:12:42.347 INFO:teuthology.orchestra.run.smithi113.stderr:available_objects: 984 in_flight_objects: 16 total objects: 1000 in_flight 16

799610 - unrelated to bluestore?

2017-02-09T02:54:13.211 INFO:teuthology.run:Summary data:
{description: 'rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml
clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/osd-delay.yaml
msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml
workloads/admin_socket_objecter_requests.yaml}', duration: 1424.7001299858093,
failure_reason: '"2017-02-09 02:30:12.560656 mon.1 172.21.15.165:6789/0 2 : cluster
[WRN] message from mon.0 was stamped 9.243840s in the future, clocks not synchronized"
in cluster log', flavor: basic, owner: scheduled_yuriw@teuthology, sentry_event: 'http://sentry.ceph.com/sepia/teuthology/?q=52a4877e2a134f1ca9b7c3f0fa3af132',
status: fail, success: false}

2017-02-09T02:54:13.212 DEBUG:teuthology.report:Pushing job info to http://paddles.front.sepia.ceph.com/
2017-02-09T02:54:13.265 INFO:teuthology.run:FAIL


@liewegas liewegas merged commit e53af89 into ceph:master Feb 16, 2017

3 checks passed

Signed-off-by: all commits in this PR are signed
Unmodified Submodules: submodules for project are unmodified
default: Build finished.

@ifed01 ifed01 deleted the ifed01:wip-bluestore-no-blobdepth branch Nov 9, 2017
