Write-amp increased for fillseq with universal compaction #10082

Open
mdcallag opened this issue May 31, 2022 · 4 comments
Labels
performance Issues related to performance that may or may not be bugs

Comments

@mdcallag
Contributor

This regression arrived after 4.1 and in or before 5.1.4.

  • Throughput is ~1.1X better for 4.1
  • Write-amp is ~1.4X larger for 5.1.4
  • Compaction wall clock seconds are ~1.7X larger for 5.1.4

Notice in the compaction IO stats below that v4.1 has data only in L0 and L7, while v5.1.4 has data in L0 and L2 through L6. For v5.1.4 only trivial moves are used for L2 through L6, but the amount of compaction within L0 is larger -- see the Write(GB) column in the compaction IO stats.

I don't know whether the issue is the extra compaction in L0 or the extra trivial moves from the additional levels. Also see issues #10075 and #9423.
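
For reference, a back-of-the-envelope check of the W-Amp numbers, assuming W-Amp in these tables is cumulative compaction Write(GB) divided by Flush(GB) (which matches the Sum rows below):

# write-amp ~= compaction Write(GB) / Flush(GB), taken from the stats below
echo "scale=2; 1741.4 / 871.4" | bc   # v4.1   -> ~2.0
echo "scale=2; 2491.5 / 877.5" | bc   # v5.1.4 -> ~2.8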

Perf for 4.1

fillseq      :       0.844 micros/op 1184197 ops/sec;  474.3 MB/s
Microseconds per op:
Count: 4000000000  Average: 0.8445  StdDev: 13.15
Min: 0.0000  Median: 0.5152  Max: 14651.0000
Percentiles: P50: 0.52 P75: 0.77 P99: 2.37 P99.9: 5.40 P99.99: 484.37

Perf for 5.1.4

fillseq      :       0.937 micros/op 1067365 ops/sec;  427.5 MB/s
Microseconds per write:
Count: 4000000000 Average: 0.9369  StdDev: 0.91
Min: 0  Median: 0.5115  Max: 3390828
Percentiles: P50: 0.51 P75: 0.77 P99: 2.41 P99.9: 5.75 P99.99: 795.25

Compaction IO stats at test end for 4.1

Level    Files   Size(MB) Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) Stall(cnt)  KeyIn KeyDrop
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0     13/12    109.92   0.0    870.1     0.0    870.1    1741.4    871.3       0.0   0.0     65.4    130.9     13618    123252    0.110      16485   3973M      0
  L7  17861/0   892122.30   0.0      0.0     0.0      0.0       0.0      0.0     871.2   0.0      0.0      0.0         0         0    0.000          0       0      0
 Sum  17874/12  892232.22   0.0    870.1     0.0    870.1    1741.4    871.3     871.2   2.0     65.4    130.9     13618    123252    0.110      16485   3973M      0
 Int      0/0       0.00   0.0      5.0     0.0      5.0      10.1      5.0       5.1   2.0     64.9    129.8        80       714    0.112         37     23M      0
Flush(GB): cumulative 871.406, interval 5.045
Stalls(count): 16485 level0_slowdown, 16485 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 pending_compaction_bytes, 0 memtable_compaction, 0 leveln_slowdown_soft, 0 leveln_slowdown_hard

** DB Stats **
Uptime(secs): 3360.3 total, 20.0 interval
Cumulative writes: 3979M writes, 3979M keys, 3979M batches, 1.0 writes per batch, ingest: 1616.01 GB, 492.45 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative compaction: 1741.42 GB write, 530.67 MB/s write, 870.10 GB read, 265.15 MB/s read, 13618.3 seconds
Cumulative stall: 00:05:6.599 H:M:S, 9.1 percent

Compaction IO stats at test end for 5.1.4

Level    Files   Size(MB) Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      8/6     118.78   0.0   1614.1     0.0   1614.1    2491.5    877.4       0.0   0.0     73.3    113.1     22553    178836    0.126   7346M      0
  L2    115/0    3978.77   0.0      0.0     0.0      0.0       0.0      0.0     758.6   0.0      0.0      0.0         0         0    0.000       0      0
  L3   2172/0   64357.19   0.0      0.0     0.0      0.0       0.0      0.0     633.5   0.0      0.0      0.0         0         0    0.000       0      0
  L4   2563/0   70176.73   0.0      0.0     0.0      0.0       0.0      0.0     419.9   0.0      0.0      0.0         0         0    0.000       0      0
  L5   2506/0   77268.72   0.0      0.0     0.0      0.0       0.0      0.0     366.2   0.0      0.0      0.0         0         0    0.000       0      0
  L6   6849/0   180472.26   0.0      0.0     0.0      0.0       0.0      0.0     395.3   0.0      0.0      0.0         0         0    0.000       0      0
 Sum  30569/6   898441.39   0.0   1614.1     0.0   1614.1    2491.5    877.4    3063.6   2.8     73.3    113.1     22553    178836    0.126   7346M      0
Uptime(secs): 3740.2 total, 3740.2 interval
Flush(GB): cumulative 877.468, interval 3.960
Cumulative compaction: 2491.51 GB write, 682.13 MB/s write, 1614.13 GB read, 441.92 MB/s read, 22553.0 seconds
Stalls(count): 2088 level0_slowdown, 2088 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 3 memtable_compaction, 129 memtable_slowdown, interval 0 total count

** DB Stats **
Uptime(secs): 3740.2 total, 20.0 interval
Cumulative writes: 3993M writes, 3993M keys, 3993M commit groups, 1.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:55.260 H:M:S, 1.5 percent

Command lines for 4.1 and 5.1.4

# for 4.1
./db_bench --benchmarks=fillseq --level0_file_num_compaction_trigger=8 --level0_slowdown_writes_trigger=20 --level0_stop_writes_trigger=30 --max_background_flushes=4 --max_background_compactions=12 --max_write_buffer_number=8 --db=/data/m/rx --wal_dir=/data/m/rx --num=4000000000 --num_levels=8 --key_size=20 --value_size=400 --block_size=8192 --cache_size=225485783040 --cache_numshardbits=6 --compression_ratio=0.5 --compression_type=lz4 --bytes_per_sync=8388608 --cache_index_and_filter_blocks=1 --benchmark_write_rate_limit=0 --writes_per_second=0 --write_buffer_size=16777216 --target_file_size_base=16777216 --max_bytes_for_level_base=67108864 --verify_checksum=1 --delete_obsolete_files_period_micros=62914560 --max_bytes_for_level_multiplier=8 --statistics=0 --stats_per_interval=1 --stats_interval_seconds=20 --report_interval_seconds=5 --histogram=1 --memtablerep=skip_list --bloom_bits=10 --open_files=-1 --subcompactions=4 --compaction_style=1 --universal_compression_size_percent=80 --universal_min_merge_width=2 --universal_max_merge_width=20 --universal_size_ratio=1 --universal_max_size_amplification_percent=200 --universal_allow_trivial_move=1 --universal_compression_size_percent=-1 --use_existing_db=0 --sync=0 --threads=1 --memtablerep=vector --disable_wal=1 --seed=1649938008 --report_file=bm.uc.nt32.cm1.d0.sc4.tm/v4.1/benchmark_fillseq.wal_disabled.v400.log.r.csv

# for 5.1.4
./db_bench --benchmarks=fillseq --allow_concurrent_memtable_write=false --level0_file_num_compaction_trigger=8 --level0_slowdown_writes_trigger=20 --level0_stop_writes_trigger=30 --max_background_flushes=4 --max_background_compactions=12 --max_write_buffer_number=8 --db=/data/m/rx --wal_dir=/data/m/rx --num=4000000000 --num_levels=8 --key_size=20 --value_size=400 --block_size=8192 --cache_size=225485783040 --cache_numshardbits=6 --compression_max_dict_bytes=0 --compression_ratio=0.5 --compression_type=lz4 --bytes_per_sync=8388608 --cache_index_and_filter_blocks=1 --cache_high_pri_pool_ratio=0.5 --benchmark_write_rate_limit=0 --write_buffer_size=16777216 --target_file_size_base=16777216 --max_bytes_for_level_base=67108864 --verify_checksum=1 --delete_obsolete_files_period_micros=62914560 --max_bytes_for_level_multiplier=8 --statistics=0 --stats_per_interval=1 --stats_interval_seconds=20 --report_interval_seconds=5 --histogram=1 --memtablerep=skip_list --bloom_bits=10 --open_files=-1 --subcompactions=4 --compaction_style=1 --universal_compression_size_percent=80 --pin_l0_filter_and_index_blocks_in_cache=1 --universal_min_merge_width=2 --universal_max_merge_width=20 --universal_size_ratio=1 --universal_max_size_amplification_percent=200 --universal_allow_trivial_move=1 --universal_compression_size_percent=-1 --use_existing_db=0 --sync=0 --threads=1 --memtablerep=vector --allow_concurrent_memtable_write=false --disable_wal=1 --seed=1649982947 --report_file=bm.uc.nt32.cm1.d0.sc4.tm/v5.1.4/benchmark_fillseq.wal_disabled.v400.log.r.csv
@mdcallag mdcallag added the performance Issues related to performance that may or may not be bugs label May 31, 2022
@siying
Contributor

siying commented Jun 9, 2022

In the current implementation, if universal compaction needs to schedule an L0->L0 compaction, we can't do a trivial move. The basic idea of universal compaction is that each L0 file is treated as a sorted run, and one major goal of the algorithm is to limit the total number of sorted runs; a trivial move within L0 would leave the number of sorted runs unchanged, so it wouldn't serve that goal.

I think one workaround, which is actually the recommended configuration practice, is to use num_levels at least a few larger than level0_file_num_compaction_trigger. I see you are using --level0_file_num_compaction_trigger=8 with --num_levels=8. That's almost there, but we probably need to set num_levels to 11 or 12 for this case.
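
A minimal sketch of that workaround for the fillseq run, showing only the flags relevant here (all of these flags appear in the original command lines below; the rest of those flags would stay the same):

# keep num_levels a few larger than level0_file_num_compaction_trigger so
# compaction output can land below L0 and later be trivially moved down,
# instead of being rewritten by L0->L0 compactions
./db_bench --benchmarks=fillseq --compaction_style=1 \
  --level0_file_num_compaction_trigger=8 --num_levels=12 \
  --universal_allow_trivial_move=1 --disable_wal=1 \
  --num=4000000000 --key_size=20 --value_size=400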

@mdcallag
Contributor Author

mdcallag commented Jun 9, 2022

Thanks. I will repeat the benchmarks.

@siying
Contributor

siying commented Jun 10, 2022

I updated https://github.com/facebook/rocksdb/wiki/Universal-Compaction for trivial move condition.

@mdcallag
Contributor Author

I will update tools/benchmark.sh to use num_levels=40.
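
A hypothetical sketch of that benchmark.sh change; the variable names and plumbing here are assumptions for illustration, not the actual script:

# hypothetical: default universal-compaction runs to a deep LSM tree so
# compaction output can be placed below L0 and trivially moved down
num_levels=${num_levels:-40}
params_univ="$params_univ --num_levels=$num_levels"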

Repeated the tests with num_levels = 8, 20, and 40.

Summary for cached workload:

  • a larger value for num_levels decreases write-amp for fillseq
  • other changes weren't obvious; nothing helped or hurt by much

Summary for IO-bound workload:

  • a larger value for num_levels greatly decreases write-amp for fillseq (roughly cut in half) but also decreases fillseq throughput by ~20%. With num_levels=8 there were some stalls; with larger values there were none. Perhaps a larger value for num_levels increases CPU for the foreground (user) thread
  • for overwrite, throughput was also lower with the larger values for num_levels, but stalls and write-amp were reduced

For the cached-by-RocksDB workload:

cached by RocksDB, universal compaction for RocksDB 6.28.2.
Results in order:
* --num_levels=8
* --num_levels=20
* --num_levels=40

ops_sec mb_sec  lsm_sz  blob_sz c_wgb   w_amp   c_mbps  c_wsecs c_csecs b_rgb   b_wgb   usec_op p50     p99     p99.9   p99.99  pmax    uptime  stall%  Nstall  u_cpu   s_cpu   rss     test    date    version job_id
1069697 428.5   9GB     0.0GB   9.9     1.2     507.6   30      30      0       0       0.9     0.5     2       6       59      5553    20      0.0     0       0.1     0.0     0.2     fillseq.wal_disabled.v400       2022-04-8T00:13:42      6.28
1082988 433.8   9GB     0.0GB   8.7     1.0     444.2   27      26      0       0       0.9     0.5     2       5       53      13846   20      0.0     0       0.1     0.0     0.1     fillseq.wal_disabled.v400       2022-06-10T13:10:29     6.28
1076386 431.1   9GB     0.0GB   8.6     1.0     441.1   27      26      0       0       0.9     0.5     2       5       59      22468   20      0.0     0       0.1     0.0     0.1     fillseq.wal_disabled.v400       2022-06-10T13:07:23     6.28

2802763 1122.6  16GB    0.0GB   0.0             0.0     0       0       0       0       11.4    9.8     27      34      50      33704   1781    0.0     0       48.5    6.4     16.8    readrandom.t32  2022-04-8T00:14:21      6.28
2894344 1159.3  16GB    0.0GB   0.0             0.0     0       0       0       0       11.1    9.5     26      34      49      33431   1781    0.0     0       47.5    7.0     16.8    readrandom.t32  2022-06-10T13:11:07     6.28
2873406 1150.9  16GB    0.0GB   0.0             0.0     0       0       0       0       11.1    9.5     27      34      50      39958   1781    0.0     0       48.0    6.7     16.8    readrandom.t32  2022-06-10T13:08:01     6.28

552048  2211.2  16GB    0.0GB   0.0             0.0     0       0       0       0       58.0    55.9    130     167     215     22097   1783    0.0     0       31.7    22.7    16.8    fwdrange.t32    2022-04-8T00:44:24      6.28
579130  2319.7  16GB    0.0GB   0.0             0.0     0       0       0       0       55.3    51.2    144     168     230     24327   1782    0.0     0       24.4    30.5    16.8    fwdrange.t32    2022-06-10T13:41:10     6.28
600546  2405.4  16GB    0.0GB   0.0             0.0     0       0       0       0       53.3    49.4    133     167     224     123996  1782    0.0     0       25.8    29.2    16.8    fwdrange.t32    2022-06-10T13:38:04     6.28

2793535 1118.9  16GB    0.0GB   0.0             0.0     0       0       0       0       11.5    117.2   169     229     347     10602   1781    0.0     0       48.1    6.6     16.8    multireadrandom.t32     2022-04-8T01:14:27      6.28
2898478 1161.0  16GB    0.0GB   0.0             0.0     0       0       0       0       11.0    107.2   169     194     314     109937  1781    0.0     0       47.3    7.2     16.8    multireadrandom.t32     2022-06-10T14:11:13     6.28
2865853 1147.9  16GB    0.0GB   0.0             0.0     0       0       0       0       11.2    108.9   169     213     356     40638   1781    0.0     0       47.7    7.0     16.8    multireadrandom.t32     2022-06-10T14:08:07     6.28

544345  218.0                                                                           58.3    55.1    109     504     815     116908                          0.2     0.1     0.0     overwritesome.t32.s0    2022-04-8T01:44:29      6.28
615862  246.7                                                                           51.6    47.4    107     475     890     19561                           0.2     0.0     0.0     overwritesome.t32.s0    2022-06-10T14:41:16     6.28
553440  221.7                                                                           57.5    55.8    109     529     2382    18529                           0.2     0.0     0.0     overwritesome.t32.s0    2022-06-10T14:38:10     6.28

490286  1963.8  19GB    0.0GB   29.6    8.3     17.0    54      97      0       0       65.3    64.0    126     192     254     136765  1784    0.0     1       52.7    3.6     49.0    revrangewhilewriting.t32        2022-04-8T01:44:38      6.28
458940  1838.3  21GB    0.0GB   20.2    5.7     11.6    40      66      0       0       69.7    67.8    144     204     283     109715  1784    0.0     0       51.9    4.2     39.8    revrangewhilewriting.t32        2022-06-10T14:41:24     6.28
467377  1872.0  20GB    0.0GB   19.2    5.4     11.1    36      65      0       0       68.5    66.9    110     165     232     238547  1784    0.0     0       52.6    3.6     38.7    revrangewhilewriting.t32        2022-06-10T14:38:19     6.28

533233  2135.8  23GB    0.0GB   19.3    5.4     11.1    38      65      0       0       60.0    59.8    108     156     206     86447   1784    0.0     0       50.1    5.3     41.3    fwdrangewhilewriting.t32        2022-04-8T02:14:45      6.28
512308  2052.0  24GB    0.0GB   18.5    5.2     10.6    35      62      0       0       62.5    62.1    108     133     168     139608  1784    0.0     0       50.4    5.1     41.2    fwdrangewhilewriting.t32        2022-06-10T15:11:30     6.28
510470  2044.7  24GB    0.0GB   17.8    5.0     10.2    34      60      0       0       62.7    62.3    108     144     186     68543   1784    0.0     0       50.5    5.0     40.4    fwdrangewhilewriting.t32        2022-06-10T15:08:24     6.28

1079195 432.3   24GB    0.0GB   25.2    7.1     14.4    49      84      0       0       29.6    24.6    104     162     231     69125   1784    0.0     0       36.3    18.1    48.6    readwhilewriting.t32    2022-04-8T02:44:51      6.28
1219330 488.4   26GB    0.0GB   24.4    6.8     14.0    40      83      0       0       26.2    23.2    74      107     161     108820  1784    0.0     0       40.1    13.9    49.3    readwhilewriting.t32    2022-06-10T15:41:36     6.28
1188389 476.0   26GB    0.0GB   24.7    6.9     14.2    41      85      0       0       26.9    23.8    75      110     168     38809   1784    0.0     0       40.2    13.8    49.8    readwhilewriting.t32    2022-06-10T15:38:30     6.28

439700  176.1   23GB    0.0GB   2562.5  8.2     1470.7  7590    8518    0       0       72.8    58.7    289     2706    3862    116522  1784    15.7    2184    43.7    12.9    15.2    overwrite.t32.s0        2022-04-8T03:14:58      6.28
447268  179.2   30GB    0.0GB   2760.5  8.6     1581.9  8113    9156    0       0       71.5    56.6    310     2792    4138    24207   1787    17.4    3383    43.5    12.8    16.6    overwrite.t32.s0        2022-06-10T16:11:42     6.28
445156  178.3   27GB    0.0GB   2577.6  8.1     1478.1  7460    8666    0       0       71.9    58.5    260     2684    3698    43684   1786    14.7    1956    44.3    12.9    15.5    overwrite.t32.s0        2022-06-10T16:08:37     6.28

For the IO-bound workload:

IO-bound, universal compaction for RocksDB 6.28.2.
Results in order:
* --num_levels=8
* --num_levels=20
* --num_levels=40

ops_sec mb_sec  lsm_sz  blob_sz c_wgb   w_amp   c_mbps  c_wsecs c_csecs b_rgb   b_wgb   usec_op p50     p99     p99.9   p99.99  pmax    uptime  stall%  Nstall  u_cpu   s_cpu   rss     test    date    version job_id
591389  236.9   878GB   0.0GB   1882.4  2.1     285.1   17599   17391   0       0       1.7     0.5     3       7       970     64202   6762    0.0     24      23.8    2.4     19.4    fillseq.wal_disabled.v400       2022-04-9T15:12:42      6.28
499658  200.1   878GB   0.0GB   959.1   1.1     122.7   9451    9099    0       0       2.0     0.5     3       6       1581    107002  8004    0.0     0       19.2    1.4     10.5    fillseq.wal_disabled.v400       2022-06-10T16:42:10     6.28
502225  201.2   878GB   0.0GB   989.3   1.1     127.2   9820    9455    0       0       2.0     0.5     3       6       981     147682  7964    0.0     0       19.6    1.5     10.7    fillseq.wal_disabled.v400       2022-06-10T16:39:00     6.28

288071  115.4   878GB   0.0GB   0.0             0.0     0       0       0       0       111.1   126.4   219     248     875     340114  1787    0.0     0       11.4    6.7     0.0     readrandom.t32  2022-04-9T17:05:32      6.28
289007  115.8   878GB   0.0GB   0.0             0.0     0       0       0       0       110.7   125.8   220     248     752     205368  1788    0.0     0       11.1    6.4     0.0     readrandom.t32  2022-06-10T18:55:42     6.28
289054  115.8   878GB   0.0GB   0.0             0.0     0       0       0       0       110.7   125.0   220     249     819     184032  1788    0.0     0       11.5    6.6     0.0     readrandom.t32  2022-06-10T18:51:52     6.28

173076  693.2   878GB   0.0GB   0.0             0.0     0       0       0       0       184.9   169.0   374     380     805     131688  1790    0.0     0       16.2    6.5     0.0     fwdrange.t32    2022-04-9T17:36:03      6.28
172957  692.8   878GB   0.0GB   0.0             0.0     0       0       0       0       185.0   168.3   374     380     576     133976  1791    0.0     0       16.2    6.1     0.0     fwdrange.t32    2022-06-10T19:26:14     6.28
174591  699.3   878GB   0.0GB   0.0             0.0     0       0       0       0       183.3   167.9   373     380     782     261739  1793    0.0     0       16.2    6.3     0.0     fwdrange.t32    2022-06-10T19:22:30     6.28

289342  115.9   878GB   0.0GB   0.0             0.0     0       0       0       0       110.6   1086.3  1797    1896    15082   154183  1786    0.0     0       11.1    6.7     0.0     multireadrandom.t32     2022-04-9T18:06:35      6.28
289495  116.0   878GB   0.0GB   0.0             0.0     0       0       0       0       110.5   1085.9  1794    1897    11916   254219  1788    0.0     0       10.9    6.4     0.0     multireadrandom.t32     2022-06-10T19:56:47     6.28
290691  116.4   878GB   0.0GB   0.0             0.0     0       0       0       0       110.1   1084.0  1785    1899    15995   217124  1790    0.0     0       11.2    6.6     0.0     multireadrandom.t32     2022-06-10T19:53:02     6.28

232441  93.1    1TB     0.0GB   2389.7  15.0    1431.9  7946    9046    0       0       137.6   59.0    948     26833   47217   76729   1709    14.0    3061    26.9    8.6     0.0     overwritesome.t32.s0    2022-04-9T18:37:06      6.28
198849  79.6    1TB     0.0GB   2263.3  14.1    1151.8  3898    7472    0       0       160.8   59.0    146     44690   72631   136335  2012    2.0     354     27.4    8.5     14.0    overwritesome.t32.s0    2022-06-10T20:27:21     6.28
207243  83.0    1TB     0.0GB   2341.0  14.6    1241.8  4050    7721    0       0       154.3   57.8    146     44319   73419   124703  1930    0.3     54      25.9    8.3     14.0    overwritesome.t32.s0    2022-06-10T20:23:32     6.28

81686   327.2   1TB     0.0GB   292.8   81.8    167.9   1211    1094    0       0       391.7   370.2   859     1890    6055    1113186 1786    0.0     0       20.9    6.6     0.0     revrangewhilewriting.t32        2022-04-9T19:06:00      6.28
84931   340.2   980GB   0.0GB   257.8   72.0    147.7   1193    2559    0       0       376.7   340.9   937     1887    9265    211906  1787    0.0     0       24.5    5.8     0.0     revrangewhilewriting.t32        2022-06-10T21:01:30     6.28
82521   330.5   1TB     0.0GB   305.4   85.5    175.1   341     1120    0       0       387.7   344.7   880     2164    9041    518353  1786    0.0     0       20.5    6.4     0.0     revrangewhilewriting.t32        2022-06-10T20:55:49     6.28

99585   398.9   1TB     0.0GB   211.0   59.1    121.0   714     680     0       0       321.3   316.7   784     1660    5461    315885  1785    0.0     0       17.9    6.8     0.0     fwdrangewhilewriting.t32        2022-04-9T19:36:34      6.28
113974  456.5   984GB   0.0GB   98.9    27.7    56.7    128     322     0       0       280.7   283.5   578     1227    5620    309351  1786    0.0     0       19.1    5.7     0.0     fwdrangewhilewriting.t32        2022-06-10T21:32:05     6.28
74786   299.6   1TB     0.0GB   781.8   218.9   448.0   889     3227    0       0       427.8   375.6   1397    2728    8513    875539  1787    0.0     0       17.2    8.0     0.0     fwdrangewhilewriting.t32        2022-06-10T21:26:22     6.28

226769  90.8    1TB     0.0GB   339.9   95.2    194.9   1401    1382    0       0       141.1   140.9   359     1033    2255    79909   1785    0.0     0       15.7    7.7     0.0     readwhilewriting.t32    2022-04-9T20:07:06      6.28
240723  96.4    987GB   0.0GB   43.1    12.1    24.7    62      162     0       0       132.9   139.1   303     427     1615    239042  1786    0.0     0       15.8    6.4     0.0     readwhilewriting.t32    2022-06-10T22:02:36     6.28
229878  92.1    1TB     0.0GB   207.3   58.0    118.8   245     847     0       0       139.2   140.4   351     858     2185    145867  1787    0.0     0       16.4    6.9     0.0     readwhilewriting.t32    2022-06-10T21:56:57     6.28

224366  89.9    1TB     0.0GB   2495.0  15.5    1423.8  8449    9934    0       0       142.6   58.7    945     28882   49557   90829   1794    14.6    3587    28.3    9.1     17.1    overwrite.t32.s0        2022-04-9T20:37:38      6.28
211246  84.6    1TB     0.0GB   2218.4  14.7    1269.4  4132    7262    0       0       151.5   59.1    137     41652   71848   104182  1790    0.8     147     24.8    8.0     13.6    overwrite.t32.s0        2022-06-10T22:33:09     6.28
204372  81.9    1TB     0.0GB   1962.8  13.4    1122.1  4315    7985    0       0       156.6   58.6    157     44074   87195   151145  1791    3.7     519     25.2    7.7     13.3    overwrite.t32.s0        2022-06-10T22:27:31     6.28

mdcallag added a commit to mdcallag/rocksdb-1 that referenced this issue Jun 13, 2022
Summary:
See facebook#10082 for more details. Trivial move isn't done for universal
compaction when the compaction is from L0 into L0, so too small a value for
num_levels with db_bench means fewer trivial moves with universal compaction,
which increases write-amp.

Test Plan: run it

facebook-github-bot pushed a commit that referenced this issue Jun 13, 2022
Summary:
See #10082 for more details. Trivial move isn't done for universal
compaction when the compaction is from L0 into L0, so too small a value for
num_levels with db_bench means fewer trivial moves with universal compaction,
which increases write-amp.

Pull Request resolved: #10158

Test Plan: run it

Reviewed By: siying

Differential Revision: D37122519

Pulled By: mdcallag

fbshipit-source-id: 1cb39049676f68a6cc3ea8d105a9965f89d4d09e