
os/bluestore: trim cache every 50ms (instead of 200ms) #20498

Merged
merged 1 commit into ceph:master from wip-22616 on Feb 28, 2018

Conversation

liewegas
Member

In small cache size situations trimming needs to be more frequent. See
https://tracker.ceph.com/issues/22616

This isn't a complete solution: in very low memory situations an even lower
value would be needed, or perhaps bluestore_default_buffered_read=false.

Signed-off-by: Sage Weil <sage@redhat.com>
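
For illustration, here is a minimal sketch of what an interval-driven trim loop looks like in general. The names and structure below are hypothetical and simplified, not the actual BlueStore mempool-thread code; they only show the knob this PR turns: the sweep period drops from 200ms to 50ms, so each sweep has less accumulated growth to evict.

```cpp
// Simplified, hypothetical sketch of interval-driven cache trimming.
// Not the real BlueStore implementation; it only illustrates the effect
// of shortening the sweep interval from 200ms to 50ms.
#include <atomic>
#include <chrono>
#include <cstdint>
#include <thread>

struct Cache {
  std::atomic<uint64_t> bytes{0};
  uint64_t target_bytes = 128 * 1024 * 1024;   // a small cache configuration

  void trim_to(uint64_t target) {
    // Evict LRU buffers/onodes until bytes <= target (elided).
  }
};

void trim_loop(Cache& cache, const std::atomic<bool>& stop) {
  using namespace std::chrono_literals;
  const auto interval = 50ms;                  // was 200ms before this change
  while (!stop.load()) {
    std::this_thread::sleep_for(interval);
    if (cache.bytes.load() > cache.target_bytes)
      cache.trim_to(cache.target_bytes);
  }
}
```

The fallback mentioned above, bluestore_default_buffered_read=false, avoids populating the buffer cache on reads at all, which sidesteps the growth rather than trimming it faster.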

Contributor

@rzarzynski left a comment


I would love to see cache trimming triggered on events, not only on elapsed time since the last sweep.
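
For illustration only, an event-triggered variant along those lines might hook the insert path and trim as soon as a high-water mark is crossed, rather than waiting for the next timed sweep. This is a hypothetical sketch, not existing Ceph code:

```cpp
// Hypothetical event-triggered trimming: trim when an insertion pushes the
// cache past a high-water mark, instead of (or in addition to) a timed sweep.
// Not existing Ceph code; names are illustrative.
#include <atomic>
#include <cstdint>

struct EventTrimCache {
  std::atomic<uint64_t> bytes{0};
  uint64_t target_bytes = 128 * 1024 * 1024;
  uint64_t high_water   = 144 * 1024 * 1024;   // target plus some slack

  void trim_to(uint64_t target) {
    // Evict LRU entries until bytes <= target (elided).
  }

  void add_buffer(uint64_t len) {
    uint64_t now = bytes.fetch_add(len) + len;
    if (now > high_water)                      // the "event": crossing the mark
      trim_to(target_bytes);
  }
};
```

The trade-off is that the trim work then runs on the insert path; per the comment further down, early BlueStore trimmed on every request before moving to the background sweep.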

@ifed01
Contributor

ifed01 commented Feb 21, 2018

I'm a bit nervous about the performance impact caused by this change...

@liewegas
Member Author

On one hand, we should expect to trim 1/4 as many items 4x as often, so no net change in total work. It also means the trimming spikes will be shorter, so tail latency should improve.

On the other hand, there may be some fixed overhead in the trim process itself (I'm not really sure what, though). FWIW, back in the beginning we trimmed on every request; I don't think we ever compared that against the sharded cache change, though.
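
To spell out the arithmetic with made-up numbers: if the cache grows at a fixed rate, a 4x shorter interval means each sweep evicts about 1/4 as much, while total evictions per second stay the same; only any per-sweep fixed cost is multiplied by four.

```cpp
// Back-of-the-envelope illustration of the comment above; the ingest rate is
// made up and only serves to show that total eviction work per second is
// unchanged while per-sweep batches (and thus pause lengths) shrink.
#include <cstdio>

int main() {
  const double ingest_per_sec = 20000.0;       // hypothetical items/sec added
  for (double interval_s : {0.2, 0.05}) {      // old vs. new sweep interval
    double evict_per_sweep = ingest_per_sec * interval_s;
    double evict_per_sec   = evict_per_sweep / interval_s;
    std::printf("interval=%3.0fms  evict/sweep=%5.0f  evict/sec=%.0f\n",
                interval_s * 1000.0, evict_per_sweep, evict_per_sec);
  }
}
```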

@tchaikov merged commit 8c31fbb into ceph:master Feb 28, 2018
@tchaikov
Contributor

@liewegas I am adding backport=luminous to the corresponding ticket; please let me know if I am wrong.

@liewegas deleted the wip-22616 branch February 28, 2018 14:34
@liewegas
Member Author

thanks!

@xiexingguo
Member

FYI: our local test results actually reveal a measurable 8K random-read/write performance regression with this patch applied (roughly 4-6% fewer IOPS on the 8K random workloads below).

Was:

Test result (bandwidth: MB/s, latency: ms):
         type    bandwidth            iops         latency
---------------------------------------------------------------------
     1M_write          730             730             215
      1M_read         1225            1225             130
        1M_rw          886             886             179
 8K_randwrite           58            7531              21
  8K_randread           73            9357              17
    8K_randrw          121           15713               9
     8K_write          103           13214              12
      8K_read          211           27101               5
        8K_rw          187           23956               5
---------------------------------------------------------------------

Now:

Test result (bandwidth: MB/s, latency: ms):
         type    bandwidth            iops         latency
---------------------------------------------------------------------
     1M_write          727             727             216
      1M_read         1230            1230             130
        1M_rw          885             885             179
 8K_randwrite           55            7093              22
  8K_randread           69            8946              17
    8K_randrw          115           14859              10
     8K_write           98           12587              12
      8K_read          212           27182               5
        8K_rw          187           24052               5
---------------------------------------------------------------------
