ceph-fuse: fix fsync() #6388
Conversation
So fsync is only broken if the ObjectCacher is disabled, right? There may be some other things broken if you do that (at least, I knew it was broken without the cacher, but it's possible this is the only broken bit), but if you think it works we should add some tests to the qa-suite that demonstrate that and make sure it stays good.
// wait for unsafe mds requests
wait_unsafe_requests();

wait_sync_caps(flush_tid);

// flush file data
// FIXME
So sync_fs actually still doesn't work, since we aren't waiting on OSD requests? Should fix that up while doing the rest of it.
On Wed, Oct 28, 2015 at 8:10 AM, Gregory Farnum notifications@github.com wrote:

> In src/client/Client.cc, #6388 (comment):
>
>     // flush caps
>     flush_caps();
>     - wait_sync_caps(last_flush_tid);
>     - ceph_tid_t flush_tid = last_flush_tid;
>     - // wait for unsafe mds requests
>     - wait_unsafe_requests();
>     - wait_sync_caps(flush_tid);
>     // flush file data
>     // FIXME
>
> So sync_fs actually still doesn't work, since we aren't waiting on OSD requests? Should fix that up while doing the rest of it.

I added code that flushes the ObjectCacher, but sync_fs() still does not wait for unsafe sync writes.

https://github.com/ceph/ceph/pull/6388/files#r43205480
I think we're okay with not blocking on sync writes; the man page states:
This needs a rebase
@ukernel please ignore the known false negative from the bot ( http://tracker.ceph.com/issues/13592 ). You can rebase and re-push to run it again.
This hasn't broken anything and looks good to me, but I don't think we have any tests that exercise it well, do we? @ukernel, can you come up with a teuthology unit test to avoid future regressions?
Signed-off-by: Yan, Zheng <zyan@redhat.com>
Unsafe requests in Inode::unsafe_ops are not sorted in tid order, so checking the first unsafe request's tid does not make sense. Unsafe requests are requests that are being journalled; they are arranged in order of journal position, so waiting for the latest unsafe request should be enough. Signed-off-by: Yan, Zheng <zyan@redhat.com>
so we can easily find dirty buffer heads in a given ObjectSet. Signed-off-by: Yan, Zheng <zyan@redhat.com>
Buffer heads in dirty_or_tx_bh are sorted in ObjectSet/Object/offset order, so we can find the dirty buffer heads in a given ObjectSet and flush only them. Signed-off-by: Yan, Zheng <zyan@redhat.com>
Signed-off-by: Yan, Zheng <zyan@redhat.com>
Signed-off-by: Yan, Zheng <zyan@redhat.com>
Signed-off-by: Yan, Zheng <zyan@redhat.com>
http://pulpito.ceph.com/gregf-2016-01-06_06:40:32-fs-greg-13583-test---basic-smithi/ The Valgrind failures are from OSDs (fixed now); the libcephfs/test.sh issues are a directory-listing bug that was just fixed.
ceph-fuse: fix fsync() Reviewed-by: Greg Farnum <gfarnum@redhat.com>
No description provided.