Compaction results in sudden read operation latency increase #188

Closed
AaronCsy opened this issue Jul 4, 2014 · 2 comments

Comments

@AaronCsy

AaronCsy commented Jul 4, 2014

Recently I was benchmarking RocksDB using YCSB (a benchmark tool from Yahoo).

I found that when background compaction starts, read latency increases significantly.

Below is the benchmark output. At first the throughput was above 25,000 ops/sec, but then it fell to under 1,000. The DB log shows compactions running at that moment, and read latency rose sharply.

1090 sec: 26162378 operations; 28859.61 current ops/sec; [UPDATE AverageLatency(us)=489.62] [READ AverageLatency(us)=607.11]
1100 sec: 26438233 operations; 27582.74 current ops/sec; [UPDATE AverageLatency(us)=490.13] [READ AverageLatency(us)=657.91]
1110 sec: 26672131 operations; 23389.8 current ops/sec; [UPDATE AverageLatency(us)=497.78] [READ AverageLatency(us)=857.63]
1120 sec: 26958109 operations; 28594.94 current ops/sec; [UPDATE AverageLatency(us)=484.1] [READ AverageLatency(us)=622.79]
1130 sec: 27223242 operations; 26510.65 current ops/sec; [UPDATE AverageLatency(us)=493.76] [READ AverageLatency(us)=700.99]
1140 sec: 27429248 operations; 20598.54 current ops/sec; [UPDATE AverageLatency(us)=842.26] [READ AverageLatency(us)=699.32]
1150 sec: 27701955 operations; 27270.7 current ops/sec; [UPDATE AverageLatency(us)=480.39] [READ AverageLatency(us)=680.57]
1160 sec: 27974564 operations; 27258.17 current ops/sec; [UPDATE AverageLatency(us)=476.12] [READ AverageLatency(us)=685.31]
1170 sec: 28191861 operations; 21727.53 current ops/sec; [UPDATE AverageLatency(us)=776.45] [READ AverageLatency(us)=684.53]
1180 sec: 28466739 operations; 27485.05 current ops/sec; [UPDATE AverageLatency(us)=486.75] [READ AverageLatency(us)=665.02]
1190 sec: 28558575 operations; 9182.68 current ops/sec; [UPDATE AverageLatency(us)=442.44] [READ AverageLatency(us)=3015.67]
1200 sec: 28711367 operations; 15279.2 current ops/sec; [UPDATE AverageLatency(us)=474.88] [READ AverageLatency(us)=1611.89]
1210 sec: 28894289 operations; 18290.37 current ops/sec; [UPDATE AverageLatency(us)=471.45] [READ AverageLatency(us)=1266.77]
1220 sec: 29028776 operations; 13447.36 current ops/sec; [UPDATE AverageLatency(us)=1534.71] [READ AverageLatency(us)=832]
1230 sec: 29162725 operations; 13393.56 current ops/sec; [UPDATE AverageLatency(us)=482.13] [READ AverageLatency(us)=1878.88]
1240 sec: 29287465 operations; 12474 current ops/sec; [UPDATE AverageLatency(us)=473.34] [READ AverageLatency(us)=2105.42]
1250 sec: 29408489 operations; 12101.19 current ops/sec; [UPDATE AverageLatency(us)=1069.27] [READ AverageLatency(us)=1563.6]
1260 sec: 29586242 operations; 17773.52 current ops/sec; [UPDATE AverageLatency(us)=474.36] [READ AverageLatency(us)=1318.15]
1270 sec: 29781131 operations; 19486.95 current ops/sec; [UPDATE AverageLatency(us)=475.59] [READ AverageLatency(us)=1157.17]
1280 sec: 29906445 operations; 12531.4 current ops/sec; [UPDATE AverageLatency(us)=473.33] [READ AverageLatency(us)=2067.17]
1290 sec: 30052515 operations; 14605.54 current ops/sec; [UPDATE AverageLatency(us)=818.86] [READ AverageLatency(us)=1353.53]
1300 sec: 30228448 operations; 17591.54 current ops/sec; [UPDATE AverageLatency(us)=503.73] [READ AverageLatency(us)=1302.31]
1310 sec: 30401131 operations; 17266.57 current ops/sec; [UPDATE AverageLatency(us)=487.11] [READ AverageLatency(us)=1351.72]
1320 sec: 30552821 operations; 15169 current ops/sec; [UPDATE AverageLatency(us)=445.6] [READ AverageLatency(us)=1622.69]
1330 sec: 30706651 operations; 15381.46 current ops/sec; [UPDATE AverageLatency(us)=442.76] [READ AverageLatency(us)=1625.45]
1340 sec: 30880309 operations; 17364.06 current ops/sec; [UPDATE AverageLatency(us)=474.42] [READ AverageLatency(us)=1378.06]
1350 sec: 31039154 operations; 15882.91 current ops/sec; [UPDATE AverageLatency(us)=471.2] [READ AverageLatency(us)=1166.09]
1360 sec: 31208994 operations; 16982.3 current ops/sec; [UPDATE AverageLatency(us)=1027.61] [READ AverageLatency(us)=1187.31]
1370 sec: 31300371 operations; 9137.7 current ops/sec; [UPDATE AverageLatency(us)=507.84] [READ AverageLatency(us)=2896.62]
1380 sec: 31317818 operations; 1744.53 current ops/sec; [UPDATE AverageLatency(us)=12245.85] [READ AverageLatency(us)=6128.73]
1390 sec: 31328518 operations; 1069.89 current ops/sec; [UPDATE AverageLatency(us)=20091.53] [READ AverageLatency(us)=9836.66]
1400 sec: 31342257 operations; 1373.76 current ops/sec; [UPDATE AverageLatency(us)=16708.57] [READ AverageLatency(us)=6698.25]
1410 sec: 31351652 operations; 939.5 current ops/sec; [UPDATE AverageLatency(us)=14938.25] [READ AverageLatency(us)=19349.23]
1420 sec: 31358287 operations; 663.43 current ops/sec; [UPDATE AverageLatency(us)=4225.67] [READ AverageLatency(us)=43495.25]

Oh, my configuration is:
cache_size: 48
block_size: 32
write_buffer_size: 128(MB)
compression: yes
max_open_files: 500000
target_file_size_base: 512(MB)
max_bytes_for_level_base: 5120(MB)
level0_file_num_compaction_trigger: 4
level0_slowdown_writes_trigger: 16
level0_stop_writes_trigger: 64
disableDataSync: no
max_background_compactions: 8
max_write_buffer_number: 16
rate_limit_delay_max_milliseconds: 1000
arena_block_size: 12800(KB)
# in second
delete_obsolete_files_period_micros: 300
#disable_seek_compaction
disable_seek_compaction: yes
num_levels: 7
min_write_buffer_number_to_merge: 4
max_background_flushes: 2

Could someone help me understand this strange behavior?

@mdcallag
Contributor

mdcallag commented Jul 4, 2014

What is your storage setup -- flash/disk, number of devices, etc.? With max_background_compactions=8 you can have 8 threads doing concurrent IO from compaction.

iostat output might help to explain what happens.
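
Not something raised in the thread, but if concurrent compaction IO turns out to be the cause, one simple experiment is to shrink the compaction thread pool and re-run the benchmark. A minimal sketch, assuming the standard rocksdb::Options and Env APIs (the values are illustrative only):

```cpp
#include <rocksdb/env.h>
#include <rocksdb/options.h>

// Illustrative only: reduce concurrent compaction IO to see whether it is
// what hurts read latency while compactions are running.
void LimitCompactionConcurrency(rocksdb::Options& options) {
  options.max_background_compactions = 2;  // down from 8 in the report
  options.env->SetBackgroundThreads(2, rocksdb::Env::LOW);   // compaction pool
  options.env->SetBackgroundThreads(1, rocksdb::Env::HIGH);  // flush pool
}
```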

@igorcanadi
Collaborator

@AaronCsy you can now try using Options::rate_limiter, which will limit the compaction write rate and give a much better read SLA.
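
A minimal sketch of that suggestion, assuming the public NewGenericRateLimiter factory; the 40 MB/s budget is an illustrative assumption and should be tuned against the device's throughput and the read SLA you need:

```cpp
#include <rocksdb/options.h>
#include <rocksdb/rate_limiter.h>

// Sketch: cap the bytes per second that compactions and flushes may write,
// leaving IO headroom for reads.
void LimitCompactionWrites(rocksdb::Options& options) {
  options.rate_limiter.reset(
      rocksdb::NewGenericRateLimiter(40LL * 1024 * 1024 /* bytes per second */));
}
```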

Since this issue is old, closing it for now, but please feel free to open a new one with any new questions.
