
FileStore: support multiple ondisk finish and apply finisher #6486

Merged
merged 1 commit into ceph:master on Nov 13, 2015

Conversation

@XinzeChi (Contributor) commented Nov 6, 2015

The ondisk finisher and apply finisher would be a bottleneck on fast SSDs, because the apply finisher has to free memory, which may be slow.

Signed-off-by: Haomai Wang <haomai@xsky.com>
Signed-off-by: Xinze Chi <xinze@xsky.com>
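For context on the approach: the patch replaces the single ondisk/apply finisher with a configurable number of instances (the filestore_ondisk_finisher_threads and filestore_apply_finisher_threads options visible in the diff below) and routes each completion to one of them. The following is a minimal, self-contained sketch of that sharding pattern; SimpleFinisher, apply_finisher_num, op_seq and the modulo routing are illustrative stand-ins, not Ceph's actual Finisher code, and the real patch may pick instances differently.

// Illustrative sketch only, not Ceph's code: shard completion callbacks across
// several single-threaded finishers so that freeing per-op state does not
// serialize behind one thread, while callbacks routed to the same instance
// still run in order.
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

class SimpleFinisher {
  std::deque<std::function<void()>> q;
  std::mutex m;
  std::condition_variable cv;
  bool stopping = false;
  std::thread worker{[this] { run(); }};

public:
  ~SimpleFinisher() {
    { std::lock_guard<std::mutex> l(m); stopping = true; }
    cv.notify_one();
    worker.join();
  }
  void queue(std::function<void()> fn) {
    { std::lock_guard<std::mutex> l(m); q.push_back(std::move(fn)); }
    cv.notify_one();
  }

private:
  void run() {
    std::unique_lock<std::mutex> l(m);
    for (;;) {
      cv.wait(l, [this] { return stopping || !q.empty(); });
      if (q.empty()) return;            // stopping was set and queue is drained
      auto fn = std::move(q.front());
      q.pop_front();
      l.unlock();
      fn();                             // completion callback: may free op state
      l.lock();
    }
  }
};

int main() {
  // Hypothetical stand-in for the patch's filestore_apply_finisher_threads tunable.
  const unsigned apply_finisher_num = 3;
  std::vector<SimpleFinisher> apply_finishers(apply_finisher_num);

  // Route each completion to a finisher picked by a stable key so the work
  // spreads over several threads while per-key ordering is preserved.
  for (unsigned op_seq = 0; op_seq < 12; ++op_seq)
    apply_finishers[op_seq % apply_finisher_num].queue([op_seq] {
      // free buffers / wake waiters for op op_seq here
    });
  return 0;
}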

@XinzeChi (Contributor, Author) commented Nov 6, 2015

@liewegas, please review it, thanks.

throttle_ops(g_ceph_context, "filestore_ops",g_conf->filestore_queue_max_ops),
throttle_bytes(g_ceph_context, "filestore_bytes",g_conf->filestore_queue_max_bytes),
op_finisher(g_ceph_context),
ondisk_finisher_num(g_conf->filestore_ondisk_finisher_threads),
apply_finisher_num(g_conf->filestore_apply_finisher_threads),
Member commented on the diff:
prefix these with m_ and put them next to all the other tunables, a few lines down?

@liewegas (Member) commented Nov 6, 2015

Cosmetic issues aside this looks good to me!

The ondisk finisher and apply finisher would be a bottleneck on fast SSDs, because the apply finisher has to free memory, which may be slow.

Signed-off-by: Haomai Wang <haomai@xsky.com>
Signed-off-by: Xinze Chi <xinze@xsky.com>
@cxwshawn commented Nov 7, 2015

@XinzeChi @liewegas @yuyuyu101 Would it be better to implement the thread-count control inside Finisher itself (defaulting to 1, so other callers would not be affected)?
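To make the suggestion concrete: below is a minimal, self-contained sketch of what a thread-count knob inside the finisher itself could look like, i.e. one queue drained by N worker threads with a default of 1. MultiWorkerFinisher is an illustrative stand-in, not Ceph's actual Finisher API. One practical difference from a pool of single-threaded finishers is that several workers on one queue can complete back-to-back callbacks out of order, whereas one single-threaded finisher per shard keeps per-shard ordering.

// Illustrative sketch only, not Ceph's Finisher API: one queue, N workers,
// defaulting to 1 so existing callers would behave as before.
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

class MultiWorkerFinisher {
  std::deque<std::function<void()>> q;
  std::mutex m;
  std::condition_variable cv;
  bool stopping = false;
  std::vector<std::thread> workers;

public:
  explicit MultiWorkerFinisher(unsigned nthreads = 1) {
    for (unsigned i = 0; i < nthreads; ++i)
      workers.emplace_back([this] {
        std::unique_lock<std::mutex> l(m);
        for (;;) {
          cv.wait(l, [this] { return stopping || !q.empty(); });
          if (q.empty()) return;          // stopping was set and queue is drained
          auto fn = std::move(q.front());
          q.pop_front();
          l.unlock();
          fn();                           // completion callback
          l.lock();
        }
      });
  }
  ~MultiWorkerFinisher() {
    { std::lock_guard<std::mutex> l(m); stopping = true; }
    cv.notify_all();
    for (auto &w : workers) w.join();
  }
  void queue(std::function<void()> fn) {
    { std::lock_guard<std::mutex> l(m); q.push_back(std::move(fn)); }
    cv.notify_one();
  }
};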

@xinxinsh commented Nov 9, 2015

@XinzeChi, do you have any performance numbers showing that the ondisk finisher and apply finisher are a bottleneck for SSDs?

@XinzeChi (Contributor, Author) commented Nov 9, 2015

The number of apply finishers is not the only bottleneck; #6484 addresses another one, and there are further bottlenecks for SSDs as well. I will submit more pull requests in the next step, after #6486 and #6484 are merged.

@athanatos athanatos changed the title FileStore: support multiple ondisk finish and apply finisher DNM: FileStore: support multiple ondisk finish and apply finisher Nov 9, 2015
@athanatos athanatos changed the title DNM: FileStore: support multiple ondisk finish and apply finisher FileStore: support multiple ondisk finish and apply finisher Nov 9, 2015
@athanatos (Contributor) commented:
Sorry, misread something.

@liewegas (Member) commented:
@XinzeChi This passed testing, but I just realized it was with the default of 1 instance. I'm updating the teuthology ceph.conf to set this to 3 and will run it through again.
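For reference, the corresponding ceph.conf override might look like the snippet below. The option names come from the diff above; the [osd] section placement is an assumption, and this is a sketch rather than the exact teuthology configuration.

[osd]
    filestore_ondisk_finisher_threads = 3
    filestore_apply_finisher_threads = 3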

liewegas added a commit that referenced this pull request Nov 13, 2015
osd: FileStore: support multiple ondisk finish and apply finishers

Reviewed-by: Sage Weil <sage@redhat.com>
Reviewed-by: Samuel Just <sjust@redhat.com>
@liewegas liewegas merged commit bc45067 into ceph:master Nov 13, 2015