
os/bluestore: add new garbage collector #12144

Merged
merged 11 commits into ceph:master from ifed01:wip-bluestore-no-blobdepth on Feb 16, 2017

Conversation

@ifed01 (Contributor) commented Nov 22, 2016

This is the final version; it estimates how many allocation units can be saved by uncompressing overlapped blob(s) and storing them in raw format.
Rebased on top of PR #12904
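
For context, here is a minimal sketch of the kind of estimate meant above. All names are hypothetical and this is not the PR's code; it only illustrates the idea of comparing the space a compressed blob occupies now against what its still-referenced bytes would need if rewritten raw.

// Hypothetical sketch (not the PR code): estimate allocation units saved by
// rewriting a compressed blob's still-referenced bytes in raw form.
#include <cstdint>

struct BlobStats {
  uint64_t length_on_disk;    // space the compressed blob occupies now
  uint64_t referenced_bytes;  // bytes of it still referenced by logical extents
};

// allocation units needed to hold 'bytes' of raw data
static uint64_t alloc_units(uint64_t bytes, uint64_t au_size) {
  return (bytes + au_size - 1) / au_size;
}

// positive result => collecting the blob is expected to free space
int64_t expected_au_saving(const BlobStats& b, uint64_t au_size) {
  return int64_t(alloc_units(b.length_on_disk, au_size)) -
         int64_t(alloc_units(b.referenced_bytes, au_size));
}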

@ifed01 ifed01 force-pushed the wip-bluestore-no-blobdepth branch 5 times, most recently from 4cc0768 to 588e942 on November 29, 2016 14:43
@ifed01 ifed01 force-pushed the wip-bluestore-no-blobdepth branch 4 times, most recently from f67dd94 to 810776f on December 5, 2016 15:10
@ifed01 ifed01 changed the title from "[RFC]os/bluestore: add new garbage collector" to "os/bluestore: add new garbage collector" on Dec 5, 2016
@liewegas (Member) left a comment

Some nits here, but I think we can do a bit better if we move this logic into _wctx_finish, after we put_ref. That way we can look at the final ref_map instead of making a full copy inside the GC class.

(Also, I think this will change somewhat once we have a different/more compact representation of the ref_map...)

extent1 <loffs = 100, boffs = 100, len = 10> -> blob1<compressed, len_on_disk=4096, logical_len=8192>
extent2 <loffs = 200, boffs = 100, len = 10> -> blob2<raw, len_on_disk=4096, llen=4096>
extent3 <loffs = 300, boffs = 100, len = 10> -> blob1<compressed, len_on_disk=4096, llen=8192>
extent4 <loffs = 4096, boffs = 100, len = 10> -> blob3<raw, len_on_disk=4096, llen=4096>
Member

I assume for these 4, len = 100, not 10?

Member

and for extent3, boffs = 300?

Contributor Author

Extent lengths don't matter much in this example; both 10 and 100 result in the same behavior.
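
Reading the example above with 4096-byte allocation units (an assumption; the numbers are only illustrative):

blob1 (compressed): len_on_disk = 4096  => occupies 1 allocation unit
bytes of blob1 still referenced: extent1 + extent3 = 10 + 10 = 20 (or 100 + 100 = 200)
if an overlapping write triggers GC, those few bytes are decompressed and rewritten
raw as part of the new write, blob1 becomes unreferenced, and its 4096-byte
allocation unit can be released: expected saving ~1 AU.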

* per all blobs to enable compressed blobs garbage collection
*
*/
OPTION(bluestore_gc_enable_total_threshold, OPT_INT, 0)
Member

Do we really want defaults at 0? Doesn't that mean aggressively collect even if there is only a tiny benefit?

Contributor Author

My considerations on that:

  1. There is no need to store data compressed if it provides no storage saving.
  2. This way the data layout stays more consistent with the original write handling: no saving from compression means no compressed blobs are written.

Any other suggestions?
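
As a hedged illustration of what such a threshold gates (hypothetical helper, not the actual BlueStore check):

// Sketch only: run GC when the estimated number of freed allocation units
// exceeds bluestore_gc_enable_total_threshold. With the default of 0, any
// positive expected saving triggers collection, i.e. data is not kept
// compressed when compression yields no space benefit.
bool should_collect(int64_t expected_au_saving, int64_t gc_total_threshold) {
  return expected_au_saving > gc_total_threshold;
}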

class GarbageCollector
{
public:
typedef vector<AllocExtent> SimpleAllocExtentsVector;
Member

IMO this typedef obscures more than it helps

@liewegas (Member)

BTW, a few style nits for this PR and others:

// double-slash comments have a space
//not like this

// comparison operators have spaces, like so
if (a >= b) ;
// not
if (a >=b) ;
// same with braces...
void func(int foo) const {
// not
void func(int foo) const{
// etc.

// 80 columns please

@ifed01 (Contributor Author) commented Jan 24, 2017

@liewegas - resolved and rebased. Please take a look.

@ifed01 (Contributor Author) commented Jan 24, 2017

W.r.t. doing GC at _wctx_finish: I refactored the code to avoid copying ref_map, but left GC at its original location to avoid needing to duplicate the do_write_data/do_alloc_write/_wctx_finish call sequence for garbage-collected data. Any objections to that?
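
For clarity, a rough outline of where GC sits in the write path as described here; the function names come from the discussion above, but the ordering is my reading of it, not a quote from the code:

_do_write(...)
  do_write_data(...)    // split the write into blobs/extents, fill wctx
  do_alloc_write(...)   // allocate space, queue the new data
  // GC happens around here: extents selected for collection are folded into
  // the same wctx, so the rewritten (uncompressed) data reuses the same
  // do_write_data/do_alloc_write path instead of needing a second pass
  _wctx_finish(...)     // put_ref on replaced extents, release unused space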

ref_map.begin()->first == offset &&
ref_map.begin()->second.length == length &&
ref_map.begin()->second.refs == 1;
}
Member

this hunk is obsolete

<< " unref 0x" << std::hex << o << "~" << l
<< std::dec << dendl;
BlobInfo& bi = affected_blobs.emplace(b, BlobInfo(ref_bytes)).first->second;
bi.referenced_bytes -= l;
Member

do the subtraction up front?

affected_blobs.emplace(b, BlobInfo(ref_bytes - l));

@ifed01 (Contributor Author) Jan 25, 2017

When the key already exists, the emplace() call returns the existing entry instead of inserting a new one. Hence we need to decrement the counter afterwards to cover both the new-entry and existing-entry cases.
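
A small self-contained illustration of that emplace() behavior (simplified types, not the PR's Blob/BlobInfo):

#include <map>

std::map<int, long> affected;   // stand-in for affected_blobs

void account(int key, long ref_bytes, long l) {
  // emplace() only inserts if 'key' is absent; if it already exists, the
  // returned iterator points at the existing entry and a BlobInfo(ref_bytes - l)
  // style initializer would never be applied. Decrementing afterwards covers
  // both the first-seen and already-present cases.
  long& referenced = affected.emplace(key, ref_bytes).first->second;
  referenced -= l;
}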

// by the write into account too
auto b_it =
affected_blobs.emplace(b, BlobInfo(b->get_referenced_bytes())).first;
BlobInfo& bi = b_it->second;
Member

could skip the b_it intermediary and assign bi to ...emplace(...).first->second
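
i.e., presumably something like (just spelling the suggestion out against the snippet above):

BlobInfo& bi =
  affected_blobs.emplace(b, BlobInfo(b->get_referenced_bytes())).first->second;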

blob_info_counted = &bi;
}
used_alloc_unit = i;
}
@liewegas (Member) Jan 24, 2017

I think this loop can be replaced with a

bi.expected_allocations += (alloc_unit_end - alloc_unit_start);
if (used_alloc_unit && used_alloc_unit >= alloc_unit_start && used_alloc_unit < alloc_unit_end) {
  --bi.expected_allocations;
}
blob_info_counted = &bi;
used_alloc_unit = alloc_unit_end;


or similar, right? That avoids looping over AUs in a potentially large blob.
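
A hedged note on where alloc_unit_start / alloc_unit_end would come from (my assumption, not taken from the patch): they would be the extent's range expressed in allocation-unit indices, e.g.:

alloc_unit_start = extent_offset / min_alloc_size                        // round down
alloc_unit_end   = (extent_offset + extent_length + min_alloc_size - 1)
                     / min_alloc_size                                     // round up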

Contributor Author

yep

// when fake_ref_map is empty since subsequent extents might
// decrement its expected_allocation.
// Hence need to enumerate all the extents first.
bi.collect_candidates.emplace_back(it->logical_offset, it->length);
Member

I wonder if it's better to skip building collect_candidates and instead, if we decide this blob is toast, just iterate over the extent_map for the blob range. It'll be hot in the CPU cache, and we'll avoid the allocations for the map. Most of the time there won't be extents in that range pointing to other blobs anyway, so we won't even be enumerating more elements than are in collect_candidates, I think?
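
A rough sketch of that alternative with simplified stand-in types (not BlueStore's real ExtentMap API), assuming the extent map can be iterated in logical-offset order over the blob's range:

#include <cstdint>
#include <map>

struct Extent { uint64_t length; int blob_id; };
using ExtentMap = std::map<uint64_t, Extent>;   // key = logical_offset

// Once a blob is judged collectable, walk the map over the blob's logical
// range and pick up the extents that still point at it, instead of having
// accumulated them in collect_candidates beforehand.
template <typename Fn>
void for_each_extent_of_blob(const ExtentMap& em, uint64_t range_start,
                             uint64_t range_end, int blob_id, Fn rewrite_raw) {
  for (auto it = em.lower_bound(range_start);
       it != em.end() && it->first < range_end; ++it) {
    if (it->second.blob_id == blob_id)
      rewrite_raw(it->first, it->second);   // queue for uncompressed rewrite
  }
}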

Contributor Author

yeah, that makes sense.

@ifed01 (Contributor Author) commented Jan 25, 2017

@liewegas - resolved your comments.

@liewegas (Member)

can you rebase on master please?

@ifed01 (Contributor Author) commented Jan 25, 2017

Already rebased

@liewegas (Member) commented Jan 25, 2017 via email

Igor Fedotov added 11 commits February 2, 2017 15:22
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
…on from ref_map to ref counting properly.

Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
…e_test.OnodeSizeTracking

Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
Signed-off-by: Igor Fedotov <ifedotov@mirantis.com>
@yuriw (Contributor) commented Feb 10, 2017

@ifed01 (Contributor Author) commented Feb 14, 2017

Made a brief analysis of @yuriw's report. I don't see any bluestore/GC-related issues there.

799447 - unrelated to bluestore: filestore failure

[ RUN ] ObjectStore/StoreTest.Synthetic/1
2017-02-09T02:00:34.485 INFO:teuthology.orchestra.run.smithi194.stderr:seeding object 0
2017-02-09T02:00:35.889 INFO:teuthology.orchestra.run.smithi194.stderr:seeding object 500
2017-02-09T02:00:36.425 INFO:teuthology.orchestra.run.smithi194.stderr:Op 0
2017-02-09T02:00:36.425 INFO:teuthology.orchestra.run.smithi194.stderr:available_objects: 985 in_flight_objects: 15 total objects: 1000 in_flight 15
2017-02-09T02:00:44.702 INFO:teuthology.orchestra.run.smithi194.stderr:2017-02-09 02:00:44.704101 7f2ca6b77a40 1 journal close store_test_temp_journal
2017-02-09T02:00:44.727 INFO:teuthology.orchestra.run.smithi194.stderr:ceph_test_objectstore: /build/ceph-12.0.0-146-g1c274c8/src/test/objectstore/store_test.cc:3867: void SyntheticWorkloadState::fsck(bool): Assertion `r == 0' failed.
2017-02-09T02:00:44.728 INFO:teuthology.orchestra.run.smithi194.stderr:*** Caught signal (Aborted) **
2017-02-09T02:00:44.728 INFO:teuthology.orchestra.run.smithi194.stderr: in thread 7f2ca6b77a40 thread_name:ceph_test_objec
2017-02-09T02:00:44.776 INFO:teuthology.orchestra.run.smithi194.stderr: ceph version 12.0.0-146-g1c274c8 (1c274c8)
2017-02-09T02:00:44.776 INFO:teuthology.orchestra.run.smithi194.stderr: 1: (()+0x49f562) [0x55e91f4b6562]
2017-02-09T02:00:44.777 INFO:teuthology.orchestra.run.smithi194.stderr: 2: (()+0x11390) [0x7f2ca6766390]
2017-02-09T02:00:44.777 INFO:teuthology.orchestra.run.smithi194.stderr: 3: (gsignal()+0x38) [0x7f2c9c21e428]
2017-02-09T02:00:44.777 INFO:teuthology.orchestra.run.smithi194.stderr: 4: (abort()+0x16a) [0x7f2c9c22002a]
2017-02-09T02:00:44.780 INFO:teuthology.orchestra.run.smithi194.stderr: 5: (()+0x2dbd7) [0x7f2c9c216bd7]
2017-02-09T02:00:44.780 INFO:teuthology.orchestra.run.smithi194.stderr: 6: (()+0x2dc82) [0x7f2c9c216c82]
2017-02-09T02:00:44.780 INFO:teuthology.orchestra.run.smithi194.stderr: 7: (doSyntheticTest(boost::scoped_ptr&, int, unsigned long, unsigned long, unsigned long)+0x1f14) [0x55e91f20a894]

799471 - unrelated to bluestore?

2017-02-09T03:37:59.502 INFO:teuthology.run:Summary data:
{description: 'rados/upgrade/jewel-x-singleton/{0-cluster/{openstack.yaml start.yaml}
1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml
5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml}
7-workload/{radosbench.yaml rbd_api.yaml} 8-finish-upgrade.yaml 9-workload/{rbd-python.yaml
rgw-swift.yaml snaps-many-objects.yaml}}', duration: 6242.836159944534, failure_reason: '''wait_until_healthy''
reached maximum tries (150) after waiting for 900 seconds', flavor: basic, owner: scheduled_yuriw@teuthology,
sentry_event: 'http://sentry.ceph.com/sepia/teuthology/?q=55623407d6064a718f77dc70c7f95d6f',
status: fail, success: false}

799507 - seems to be unrelated to bluestore/garbage collector

2017-02-09T02:59:00.225 INFO:teuthology.orchestra.run.smithi139.stderr:rm: cannot remove '/var/lib/ceph': No such file or directory
2017-02-09T02:59:00.226 INFO:teuthology.orchestra.run.smithi139:Running: 'sudo rm -r /var/log/ceph'
2017-02-09T02:59:00.278 INFO:teuthology.orchestra.run.smithi139.stderr:rm: cannot remove '/var/log/ceph': No such file or directory
2017-02-09T02:59:00.279 INFO:teuthology.orchestra.run.smithi139:Running: 'sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf'
2017-02-09T02:59:00.327 INFO:teuthology.orchestra.run.smithi139.stderr:mv: cannot stat '/etc/yum/pluginconf.d/priorities.conf.orig': No such file or directory
2017-02-09T02:59:00.328 DEBUG:teuthology.parallel:result is None
2017-02-09T02:59:00.328 INFO:teuthology.nuke:Installed packages removed.
2017-02-09T02:59:00.369 INFO:teuthology.lock:unlocked smithi139.front.sepia.ceph.com
2017-02-09T02:59:00.383 INFO:teuthology.run:Summary data:
{description: 'rados/verify/{1thrash/none.yaml clusters/{fixed-2.yaml openstack.yaml}
fs/btrfs.yaml mon_kv_backend/rocksdb.yaml msgr-failures/few.yaml msgr/simple.yaml
objectstore/bluestore.yaml rados.yaml tasks/rados_api_tests.yaml validater/valgrind.yaml}',
duration: 3257.2218351364136, failure_reason: 'Command failed (workunit test rados/test.sh)
on smithi003 with status 1: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
&& cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1
CEPH_REF=1c274c80d956d0912f3295395008b43c1ce45620 TESTDIR="/home/ubuntu/cephtest"
CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0
adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 6h
/home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/test.sh''', flavor: notcmalloc,
owner: scheduled_yuriw@teuthology, sentry_event: 'http://sentry.ceph.com/sepia/teuthology/?q=99ad5a1b336546c4956facf801e3404c',
status: fail, success: false}

2017-02-09T02:59:00.383 DEBUG:teuthology.report:Pushing job info to http://paddles.front.sepia.ceph.com/
2017-02-09T02:59:00.449 INFO:teuthology.run:FAIL

799518 - filestore related?

2017-02-09T02:31:18.714 INFO:teuthology.run:Summary data:
{description: 'rados/thrash/{0-size-min-size-overrides/2-size-2-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml
clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/fastclose.yaml
msgr/simple.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/default.yaml
workloads/cache-agent-big.yaml}', duration: 1474.646087884903, failure_reason: '"2017-02-09
02:06:45.974040 mon.2 172.21.15.104:6790/0 3 : cluster [WRN] message from mon.0
was stamped 9.783507s in the future, clocks not synchronized" in cluster log',
flavor: basic, owner: scheduled_yuriw@teuthology, sentry_event: 'http://sentry.ceph.com/sepia/teuthology/?q=4253a56eed834247bfd909b142992d69',
status: fail, success: false}

2017-02-09T02:31:18.714 DEBUG:teuthology.report:Pushing job info to http://paddles.front.sepia.ceph.com/
2017-02-09T02:31:18.772 INFO:teuthology.run:FAIL

799574 - unrelated to bluestore/garbage collector

2017-02-09T02:25:55.661 INFO:tasks.workunit.client.0.smithi168.stdout:[ RUN ] EnvLibradosMutipoolTest.DBBasics
2017-02-09T02:26:05.669 INFO:tasks.workunit.client.0.smithi168.stderr:/build/ceph-12.0.0-146-g1c274c8/src/common/ceph_crypto.cc: In function 'void ceph::crypto::init(CephContext*)' thread 7f132bc11680 time 2017-02-09 02:26:05.669712
2017-02-09T02:26:05.669 INFO:tasks.workunit.client.0.smithi168.stderr:/build/ceph-12.0.0-146-g1c274c8/src/common/ceph_crypto.cc: 77: FAILED assert(crypto_context != __null)
2017-02-09T02:26:05.685 INFO:tasks.workunit.client.0.smithi168.stderr: ceph version 12.0.0-146-g1c274c8 (1c274c8
c1ce45620)
2017-02-09T02:26:05.686 INFO:tasks.workunit.client.0.smithi168.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x10e) [0x7f1321876bbe]
2017-02-09T02:26:05.686 INFO:tasks.workunit.client.0.smithi168.stderr: 2: (ceph::crypto::shutdown()+0) [0x7f1321a4bbb0]
2017-02-09T02:26:05.686 INFO:tasks.workunit.client.0.smithi168.stderr: 3: (CephContext::init_crypto()+0x15) [0x7f13219fa105]
2017-02-09T02:26:05.686 INFO:tasks.workunit.client.0.smithi168.stderr: 4: (common_init_finish(CephContext*)+0x10) [0x7f13219f6dd0]
2017-02-09T02:26:05.686 INFO:tasks.workunit.client.0.smithi168.stderr: 5: (librados::RadosClient::connect()+0x1d) [0x7f132b76708d]
2017-02-09T02:26:05.686 INFO:tasks.workunit.client.0.smithi168.stderr: 6: ./env_librados_test() [0x4df159]
2017-02-09T02:26:05.686 INFO:tasks.workunit.client.0.smithi168.stderr: 7: ./env_librados_test() [0x4df638]
2017-02-09T02:26:05.687 INFO:tasks.workunit.client.0.smithi168.stderr: 8: ./env_librados_test() [0x783fb9]
2017-02-09T02:26:05.687 INFO:tasks.workunit.client.0.smithi168.stderr: 9: ./env_librados_test() [0x77701a]
2017-02-09T02:26:05.687 INFO:tasks.workunit.client.0.smithi168.stderr: 10: ./env_librados_test() [0x777117]
2017-02-09T02:26:05.687 INFO:tasks.workunit.client.0.smithi168.stderr: 11: ./env_librados_test() [0x77739d]
2017-02-09T02:26:05.687 INFO:tasks.workunit.client.0.smithi168.stderr: 12: ./env_librados_test() [0x777683]
2017-02-09T02:26:05.687 INFO:tasks.workunit.client.0.smithi168.stderr: 13: ./env_librados_test() [0x410ae1]
2017-02-09T02:26:05.687 INFO:tasks.workunit.client.0.smithi168.stderr: 14: (__libc_start_main()+0xf5) [0x7f132a0b6f45]
2017-02-09T02:26:05.688 INFO:tasks.workunit.client.0.smithi168.stderr: 15: ./env_librados_test() [0x4ca0da]

799751 - unrelated to bluestore?

2017-02-09T06:43:45.521 INFO:teuthology.run:Summary data:
{description: 'rados/upgrade/jewel-x-singleton/{0-cluster/{openstack.yaml start.yaml}
1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml
5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml}
7-workload/{radosbench.yaml rbd_api.yaml} 8-finish-upgrade.yaml 9-workload/{rbd-python.yaml
rgw-swift.yaml snaps-many-objects.yaml}}', duration: 6203.694232940674, failure_reason: '''wait_until_healthy''
reached maximum tries (150) after waiting for 900 seconds', flavor: basic, owner: scheduled_yuriw@teuthology,
sentry_event: 'http://sentry.ceph.com/sepia/teuthology/?q=7df6c08c08b7485f865e7a99d90798c2',
status: fail, success: false}

2017-02-09T06:43:45.522 DEBUG:teuthology.report:Pushing job info to http://paddles.front.sepia.ceph.com/
2017-02-09T06:43:45.573 INFO:teuthology.run:FAIL

799755 - unrelated to bluestore:

[ RUN ] EnvLibradosMutipoolTest.DBBulkLoadKeysInRandomOrder
2017-02-09T05:05:32.358 INFO:tasks.workunit.client.0.smithi204.stdout:Test size : loop(64); bulk_size(32768)
2017-02-09T05:05:53.932 INFO:tasks.workunit.client.0.smithi204.stdout:Time by default : 7489ms
2017-02-09T05:06:03.417 INFO:tasks.workunit.client.0.smithi204.stdout:Time by librados : 9427ms
2017-02-09T05:06:05.326 INFO:tasks.workunit.client.0.smithi204.stderr:/build/ceph-12.0.0-146-g1c274c8/src/common/ceph_crypto.cc: In function 'void ceph::crypto::init(CephContext*)' thread 7fc88a177700 time 2017-02-09 05:06:05.327386
2017-02-09T05:06:05.326 INFO:tasks.workunit.client.0.smithi204.stderr:/build/ceph-12.0.0-146-g1c274c8/src/common/ceph_crypto.cc: 77: FAILED assert(crypto_context != __null)
2017-02-09T05:06:05.342 INFO:tasks.workunit.client.0.smithi204.stderr: ceph version 12.0.0-146-g1c274c8 (1c274c8)
2017-02-09T05:06:05.342 INFO:tasks.workunit.client.0.smithi204.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x7fc87fcfbc42]
2017-02-09T05:06:05.342 INFO:tasks.workunit.client.0.smithi204.stderr: 2: (ceph::crypto::init(CephContext*)+0x104) [0x7fc87ff01734]
2017-02-09T05:06:05.342 INFO:tasks.workunit.client.0.smithi204.stderr: 3: (CephContext::init_crypto()+0x19) [0x7fc87feaa829]
2017-02-09T05:06:05.342 INFO:tasks.workunit.client.0.smithi204.stderr: 4: (common_init_finish(CephContext*)+0x10) [0x7fc87fea7700]
2017-02-09T05:06:05.342 INFO:tasks.workunit.client.0.smithi204.stderr: 5: (librados::RadosClient::connect()+0x30) [0x7fc889cc8860]

799758 - unrelated to bluestore: filestore failure

[ RUN ] ObjectStore/StoreTest.Synthetic/1
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.852354 7f3f19300a40 1 filestore(filestore.test_temp_dir) leveldb db exists/created
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.852459 7f3f19300a40 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.852467 7f3f19300a40 1 journal _open store_test_temp_journal fd 6: 419430400 bytes, block size 4096 bytes, directio = 1, aio = 0
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.868591 7f3f19300a40 -1 journal check: ondisk fsid 21e1bc96-5015-4334-9c2a-4336bf84b7ee doesn't match expected 30e938ea-85d8-456d-9aee-69fefaa993fd, invalid (someone else's?) journal
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.868605 7f3f19300a40 1 journal close store_test_temp_journal
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.868629 7f3f19300a40 1 journal _open store_test_temp_journal fd 6: 419430400 bytes, block size 4096 bytes, directio = 1, aio = 0
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.885642 7f3f19300a40 0 filestore(filestore.test_temp_dir) mkjournal created journal on store_test_temp_journal
2017-02-09T05:12:40.518 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.927676 7f3f19300a40 1 filestore(filestore.test_temp_dir) mkfs done in filestore.test_temp_dir
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.927779 7f3f19300a40 0 filestore(filestore.test_temp_dir) backend generic (magic 0xef53)
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.927788 7f3f19300a40 -1 filestore(filestore.test_temp_dir) WARNING: max attr value size (1024) is smaller than osd_max_object_name_len (2048). Your backend filesystem appears to not support attrs large enough to handle the configured max rados name size. You may get unexpected ENAMETOOLONG errors on rados operations or buggy behavior
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.928079 7f3f19300a40 0 genericfilestorebackend(filestore.test_temp_dir) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.928085 7f3f19300a40 0 genericfilestorebackend(filestore.test_temp_dir) detect_features: SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:39.928087 7f3f19300a40 0 genericfilestorebackend(filestore.test_temp_dir) detect_features: splice() is disabled via 'filestore splice' config option
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.046987 7f3f19300a40 0 genericfilestorebackend(filestore.test_temp_dir) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.195049 7f3f19300a40 0 filestore(filestore.test_temp_dir) limited size xattrs
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.195304 7f3f19300a40 0 filestore(filestore.test_temp_dir) start omap initiation
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.279225 7f3f19300a40 0 filestore(filestore.test_temp_dir) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2017-02-09T05:12:40.519 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.279336 7f3f19300a40 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2017-02-09T05:12:40.520 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.279340 7f3f19300a40 1 journal _open store_test_temp_journal fd 56: 419430400 bytes, block size 4096 bytes, directio = 1, aio = 0
2017-02-09T05:12:40.520 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.279723 7f3f19300a40 1 journal _open store_test_temp_journal fd 56: 419430400 bytes, block size 4096 bytes, directio = 1, aio = 0
2017-02-09T05:12:40.520 INFO:teuthology.orchestra.run.smithi113.stderr:2017-02-09 05:12:40.280000 7f3f19300a40 1 filestore(filestore.test_temp_dir) upgrade
2017-02-09T05:12:40.520 INFO:teuthology.orchestra.run.smithi113.stderr:seeding object 0
2017-02-09T05:12:41.801 INFO:teuthology.orchestra.run.smithi113.stderr:seeding object 500
2017-02-09T05:12:42.347 INFO:teuthology.orchestra.run.smithi113.stderr:Op 0
2017-02-09T05:12:42.347 INFO:teuthology.orchestra.run.smithi113.stderr:available_objects: 984 in_flight_objects: 16 total objects: 1000 in_flight 16

799610 - unrelated to bluestore?

2017-02-09T02:54:13.211 INFO:teuthology.run:Summary data:
{description: 'rados/thrash/{0-size-min-size-overrides/2-size-1-min-size.yaml 1-pg-log-overrides/normal_pg_log.yaml
clusters/{fixed-2.yaml openstack.yaml} fs/btrfs.yaml msgr-failures/osd-delay.yaml
msgr/async.yaml objectstore/filestore.yaml rados.yaml rocksdb.yaml thrashers/morepggrow.yaml
workloads/admin_socket_objecter_requests.yaml}', duration: 1424.7001299858093,
failure_reason: '"2017-02-09 02:30:12.560656 mon.1 172.21.15.165:6789/0 2 : cluster
[WRN] message from mon.0 was stamped 9.243840s in the future, clocks not synchronized"
in cluster log', flavor: basic, owner: scheduled_yuriw@teuthology, sentry_event: 'http://sentry.ceph.com/sepia/teuthology/?q=52a4877e2a134f1ca9b7c3f0fa3af132',
status: fail, success: false}

2017-02-09T02:54:13.212 DEBUG:teuthology.report:Pushing job info to http://paddles.front.sepia.ceph.com/
2017-02-09T02:54:13.265 INFO:teuthology.run:FAIL

@liewegas liewegas merged commit e53af89 into ceph:master Feb 16, 2017
@ifed01 ifed01 deleted the wip-bluestore-no-blobdepth branch November 9, 2017 15:21