Commits on Dec 3, 2012
  1. Merge branch '2.0.0'

    Gerrit authored
    * 2.0.0:
      MB-7285 Fix badmatch in index updater on cleanup
    
    Change-Id: Ia21c533ce4fd9b8e82c26971a5c195ff1d383836
  2. @fdmanana

    MB-7285 Fix badmatch in index updater on cleanup

    fdmanana authored, Farshid Ghods committed
    By accident, the same variable name was used to match
    the count reduce of both the id tree and the view trees,
    causing a badmatch when those counts are different.
    
    Change-Id: I9cc836faa1919595dd6fb497629e6215927b6967
    Reviewed-on: http://review.couchbase.org/22947
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Tested-by: buildbot <build@couchbase.com>
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Tested-by: Damien Katz <damien@couchbase.com>
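
The bug described in this commit is a classic Erlang pattern-matching pitfall: once a variable is bound, matching it again asserts equality rather than rebinding, and raises `badmatch` on a mismatch. A minimal sketch of the shape of the bug (variable names are hypothetical; `couch_btree:full_reduce/1` is the usual reduce accessor, but the surrounding updater code here is illustrative only):

```erlang
%% Buggy shape: Count is bound by the first match, so the second match
%% asserts both reduces are equal and raises {badmatch, _} whenever the
%% id tree and a view tree have different counts.
{ok, Count} = couch_btree:full_reduce(IdBtree),
{ok, Count} = couch_btree:full_reduce(ViewBtree),

%% Fixed shape: distinct variable names, so each count binds
%% independently and differing counts are not an error.
{ok, IdCount} = couch_btree:full_reduce(IdBtree),
{ok, ViewCount} = couch_btree:full_reduce(ViewBtree),
```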
Commits on Nov 17, 2012
  1. @fdmanana

    MB-7199 Don't buffer socket data on cleanup

    fdmanana authored, Farshid Ghods committed
    If there's an error during view merging, or if we abort
    the merging because we reached limit=N rows, ensure that
    when pulling remaining data from the socket we don't
    accumulate that data, as it's not needed at all.
    
    Change-Id: Ifecb50e8df1277cbfe572b1c3a5dec9e5f66d930
    Reviewed-on: http://review.couchbase.org/22596
    Reviewed-by: Aliaksey Artamonau <aliaksiej.artamonau@gmail.com>
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
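
The fix is about how leftover socket bytes are consumed once merging stops. A hedged sketch of the idea (hypothetical function, not the actual merger code): keep reading until the peer closes, but drop each chunk instead of accumulating it.

```erlang
%% Drain and discard: pull the remaining data off the socket so the
%% connection can be shut down cleanly, but never collect the received
%% chunks in memory, since the remaining rows are not needed.
drain_socket(Socket, Timeout) ->
    case gen_tcp:recv(Socket, 0, Timeout) of
        {ok, _Chunk} ->
            drain_socket(Socket, Timeout);  %% note: no accumulator
        {error, closed} ->
            ok;
        {error, Reason} ->
            {error, Reason}
    end.
```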
Commits on Nov 15, 2012
  1. @fdmanana

    MB-7188 Prefix id btree keys with partition id

    fdmanana authored, Farshid Ghods committed
    So that for incremental index updates, we get more btree
    locality for all insert/update/delete/lookup operations,
    covering a smaller area of the btree and therefore increasing
    performance and generating less fragmentation.
    
    Change-Id: Ie90b81e4617d9c74343d240e4741aff17c039bb4
    Reviewed-on: http://review.couchbase.org/22559
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
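
The key-layout idea can be sketched with Erlang bit syntax (hypothetical helper; the actual encoding in the view engine may differ): because binary keys compare lexicographically, putting a fixed-width partition id first makes all keys of one partition sort into one contiguous btree range.

```erlang
%% Hypothetical key encoding: a 16-bit partition id prefix followed by
%% the document id. All keys of partition P now form one contiguous run
%% in the id btree, so updates for P touch a small region of the tree.
make_id_key(PartId, DocId) when is_integer(PartId), is_binary(DocId) ->
    <<PartId:16, DocId/binary>>.

%% e.g. <<3:16, "beer">> sorts before <<3:16, "wine">>,
%% and both sort before every key of partition 4.
```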
  2. @fdmanana

    MB-7131 Tweak btree node splits

    fdmanana authored, Farshid Ghods committed
    Instead of creating several full nodes and one that is not
    completely full, make all nodes at most half full.
    
    Experiments (including evperf datasets) showed that this is
    beneficial for incremental index updates (random
    inserts/updates), as it makes fragmentation grow more slowly
    and slightly speeds up inserts/updates and btree lookups.
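
The split policy can be sketched as follows (hypothetical code; the real implementation lives in the btree's chunkify logic): rather than cutting chunks at the full size threshold, cut at half the threshold so every written node keeps headroom for later inserts.

```erlang
%% Hypothetical chunkify sketch: group KVs into chunks whose byte size
%% stays at or below half the chunk threshold, so no node is written
%% more than ~half full and future random inserts cause fewer splits.
chunkify_half_full([], _Threshold) ->
    [];
chunkify_half_full(KVs, Threshold) ->
    chunkify(KVs, Threshold div 2, 0, [], []).

chunkify([], _Cap, _Sz, CurNode, Acc) ->
    lists:reverse([lists:reverse(CurNode) | Acc]);
chunkify([KV | Rest], Cap, Sz, CurNode, Acc) ->
    KVSz = erlang:external_size(KV),
    case CurNode =/= [] andalso Sz + KVSz > Cap of
        true ->
            %% current node reached half the threshold: close it and
            %% start a new one with this KV
            chunkify(Rest, Cap, KVSz, [KV], [lists:reverse(CurNode) | Acc]);
        false ->
            chunkify(Rest, Cap, Sz + KVSz, [KV | CurNode], Acc)
    end.
```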
    
    For an evperf dataset, incrementally updating an index (a ddoc
    with 4 views) that had 1 million items (per view) and a data
    size of 167 MB, with 1 million more new items (as happens
    after a rebalance-out):
    
    ** Before this change
    
    Update duration: 26m50.270s
    Final file size: 12 513 Mb
    
    ** After this change
    
    Update duration: 25m43.893s
    Final file size: 11 640 Mb
    
    Results from a more comprehensive test follow.
    
    Before
    
    1> btree_bench:test(2000000, 500000, 1000, "/home/fdmanana/tmp/foo.couch").
    Creating btree with initial number of items 2000000, batch size 5000, random batches
    Btree created in 276553.336 ms
    Btree stats after creation:
        btree_size                               16684244
        file_size                                1317839559
        fragmentation                            98.73397001280942
        kv_chunk_threshold                       7168
        kp_chunk_threshold                       6144
        kv_count                                 2000000
        kp_nodes                                 628
        kv_nodes                                 32865
        max_depth                                4
        min_depth                                4
        avg_depth                                4.0
        depth_90_percentile                      4
        depth_95_percentile                      4
        depth_99_percentile                      4
        max_reduction_size                       8
        min_reduction_size                       8
        avg_reduction_size                       8.0
        reduction_size_90_percentile             8
        reduction_size_95_percentile             8
        reduction_size_99_percentile             8
        max_elements_per_kp_node                 107
        min_elements_per_kp_node                 1
        avg_elements_per_kp_node                 53.3312101910828
        elements_per_kp_node_90_percentile       96
        elements_per_kp_node_95_percentile       99
        elements_per_kp_node_99_percentile       106
        max_elements_per_kv_node                 162
        min_elements_per_kv_node                 2
        avg_elements_per_kv_node                 60.85501293169025
        elements_per_kv_node_90_percentile       131
        elements_per_kv_node_95_percentile       145
        elements_per_kv_node_99_percentile       159
        max_kp_node_size                         4602
        min_kp_node_size                         44
        avg_kp_node_size                         2294.2420382165606
        kp_node_size_90_percentile               4129
        kp_node_size_95_percentile               4258
        kp_node_size_99_percentile               4559
        max_compressed_kp_node_size              2038
        min_compressed_kp_node_size              43
        avg_compressed_kp_node_size              1034.7420382165606
        compressed_kp_node_size_90_percentile    1825
        compressed_kp_node_size_95_percentile    1882
        compressed_kp_node_size_99_percentile    2010
        max_kv_node_size                         4375
        min_kv_node_size                         55
        avg_kv_node_size                         1644.0853491556368
        kv_node_size_90_percentile               3538
        kv_node_size_95_percentile               3916
        kv_node_size_99_percentile               4294
        max_compressed_kv_node_size              1239
        min_compressed_kv_node_size              37
        avg_compressed_kv_node_size              479.60921953445916
        compressed_kv_node_size_90_percentile    991
        compressed_kv_node_size_95_percentile    1094
        compressed_kv_node_size_99_percentile    1193
        max_key_size                             16
        min_key_size                             16
        avg_key_size                             16.0
        key_size_90_percentile                   16
        key_size_95_percentile                   16
        key_size_99_percentile                   16
        max_value_size                           6
        min_value_size                           6
        avg_value_size                           6.0
        value_size_90_percentile                 6
        value_size_95_percentile                 6
        value_size_99_percentile                 6
    
    Doing key lookups for every 1000th key (2000 keys)
    Percentiles for single key lookups, 90% -> 1.539ms, 95% -> 6.763ms, 99% -> 12.805ms
    
    Compacting btree
    Btree compaction took 57355.094 ms
    Btree stats after compaction:
        btree_size                               15269217
        file_size                                15269217
        fragmentation                            0.0
        kv_chunk_threshold                       7168
        kp_chunk_threshold                       6144
        kv_count                                 2000000
        kp_nodes                                 123
        kv_nodes                                 12500
        max_depth                                4
        min_depth                                4
        avg_depth                                4.0
        depth_90_percentile                      4
        depth_95_percentile                      4
        depth_99_percentile                      4
        max_reduction_size                       8
        min_reduction_size                       8
        avg_reduction_size                       8.0
        reduction_size_90_percentile             8
        reduction_size_95_percentile             8
        reduction_size_99_percentile             8
        max_elements_per_kp_node                 105
        min_elements_per_kp_node                 2
        avg_elements_per_kp_node                 102.6178861788618
        elements_per_kp_node_90_percentile       105
        elements_per_kp_node_95_percentile       105
        elements_per_kp_node_99_percentile       105
        max_elements_per_kv_node                 160
        min_elements_per_kv_node                 160
        avg_elements_per_kv_node                 160.0
        elements_per_kv_node_90_percentile       160
        elements_per_kv_node_95_percentile       160
        elements_per_kv_node_99_percentile       160
        max_kp_node_size                         4516
        min_kp_node_size                         87
        avg_kp_node_size                         4413.569105691057
        kp_node_size_90_percentile               4516
        kp_node_size_95_percentile               4516
        kp_node_size_99_percentile               4516
        max_compressed_kp_node_size              1686
        min_compressed_kp_node_size              72
        avg_compressed_kp_node_size              1347.2113821138212
        compressed_kp_node_size_90_percentile    1394
        compressed_kp_node_size_95_percentile    1401
        compressed_kp_node_size_99_percentile    1417
        max_kv_node_size                         4321
        min_kv_node_size                         4321
        avg_kv_node_size                         4321.0
        kv_node_size_90_percentile               4321
        kv_node_size_95_percentile               4321
        kv_node_size_99_percentile               4321
        max_compressed_kv_node_size              1228
        min_compressed_kv_node_size              1183
        avg_compressed_kv_node_size              1199.90384
        compressed_kv_node_size_90_percentile    1210
        compressed_kv_node_size_95_percentile    1211
        compressed_kv_node_size_99_percentile    1214
        max_key_size                             16
        min_key_size                             16
        avg_key_size                             16.0
        key_size_90_percentile                   16
        key_size_95_percentile                   16
        key_size_99_percentile                   16
        max_value_size                           6
        min_value_size                           6
        avg_value_size                           6.0
        value_size_90_percentile                 6
        value_size_95_percentile                 6
        value_size_99_percentile                 6
    
    Doing key lookups for every 1000th key (2000 keys)
    Percentiles for single key lookups, 90% -> 1.395ms, 95% -> 1.567ms, 99% -> 1.88ms
    
    Starting incremental inserts of 500000 new items, in random batches of size 5000
    Incremental inserts took 36789.824 ms
    Btree stats after inserts:
        btree_size                               19447730
        file_size                                180187724
        fragmentation                            89.20696173508468
        kv_chunk_threshold                       7168
        kp_chunk_threshold                       6144
        kv_count                                 2500000
        kp_nodes                                 320
        kv_nodes                                 20791
        max_depth                                4
        min_depth                                4
        avg_depth                                4.0
        depth_90_percentile                      4
        depth_95_percentile                      4
        depth_99_percentile                      4
        max_reduction_size                       8
        min_reduction_size                       8
        avg_reduction_size                       8.0
        reduction_size_90_percentile             8
        reduction_size_95_percentile             8
        reduction_size_99_percentile             8
        max_elements_per_kp_node                 106
        min_elements_per_kp_node                 1
        avg_elements_per_kp_node                 65.96875
        elements_per_kp_node_90_percentile       105
        elements_per_kp_node_95_percentile       105
        elements_per_kp_node_99_percentile       105
        max_elements_per_kv_node                 162
        min_elements_per_kv_node                 2
        avg_elements_per_kv_node                 120.24433649175124
        elements_per_kv_node_90_percentile       160
        elements_per_kv_node_95_percentile       160
        elements_per_kv_node_99_percentile       160
        max_kp_node_size                         4559
        min_kp_node_size                         44
        avg_kp_node_size                         2837.65625
        kp_node_size_90_percentile               4516
        kp_node_size_95_percentile               4516
        kp_node_size_99_percentile               4516
        max_compressed_kp_node_size              1995
        min_compressed_kp_node_size              44
        avg_compressed_kp_node_size              1029.378125
        compressed_kp_node_size_90_percentile    1394
        compressed_kp_node_size_95_percentile    1457
        compressed_kp_node_size_99_percentile    1551
        max_kv_node_size                         4375
        min_kv_node_size                         55
        avg_kv_node_size                         3247.5970852772834
        kv_node_size_90_percentile               4321
        kv_node_size_95_percentile               4321
        kv_node_size_99_percentile               4321
        max_compressed_kv_node_size              1228
        min_compressed_kv_node_size              39
        avg_compressed_kv_node_size              911.1964792458275
        compressed_kv_node_size_90_percentile    1209
        compressed_kv_node_size_95_percentile    1210
        compressed_kv_node_size_99_percentile    1213
        max_key_size                             16
        min_key_size                             16
        avg_key_size                             16.0
        key_size_90_percentile                   16
        key_size_95_percentile                   16
        key_size_99_percentile                   16
        max_value_size                           6
        min_value_size                           6
        avg_value_size                           6.0
        value_size_90_percentile                 6
        value_size_95_percentile                 6
        value_size_99_percentile                 6
    
    Doing key lookups for every 1000th key (2500 keys)
    Percentiles for single key lookups, 90% -> 1.47ms, 95% -> 1.603ms, 99% -> 4.404ms
    
    Starting incremental updates of every 5th item in the btree, in batches of size 5000 (500000 items), random order
    Incremental updates took 61793.899 ms
    Btree stats after updates:
        btree_size                               19447730
        file_size                                180187724
        fragmentation                            89.20696173508468
        kv_chunk_threshold                       7168
        kp_chunk_threshold                       6144
        kv_count                                 2500000
        kp_nodes                                 320
        kv_nodes                                 20791
        max_depth                                4
        min_depth                                4
        avg_depth                                4.0
        depth_90_percentile                      4
        depth_95_percentile                      4
        depth_99_percentile                      4
        max_reduction_size                       8
        min_reduction_size                       8
        avg_reduction_size                       8.0
        reduction_size_90_percentile             8
        reduction_size_95_percentile             8
        reduction_size_99_percentile             8
        max_elements_per_kp_node                 106
        min_elements_per_kp_node                 1
        avg_elements_per_kp_node                 65.96875
        elements_per_kp_node_90_percentile       105
        elements_per_kp_node_95_percentile       105
        elements_per_kp_node_99_percentile       105
        max_elements_per_kv_node                 162
        min_elements_per_kv_node                 2
        avg_elements_per_kv_node                 120.24433649175124
        elements_per_kv_node_90_percentile       160
        elements_per_kv_node_95_percentile       160
        elements_per_kv_node_99_percentile       160
        max_kp_node_size                         4559
        min_kp_node_size                         44
        avg_kp_node_size                         2837.65625
        kp_node_size_90_percentile               4516
        kp_node_size_95_percentile               4516
        kp_node_size_99_percentile               4516
        max_compressed_kp_node_size              1995
        min_compressed_kp_node_size              44
        avg_compressed_kp_node_size              1029.378125
        compressed_kp_node_size_90_percentile    1394
        compressed_kp_node_size_95_percentile    1457
        compressed_kp_node_size_99_percentile    1551
        max_kv_node_size                         4375
        min_kv_node_size                         55
        avg_kv_node_size                         3247.5970852772834
        kv_node_size_90_percentile               4321
        kv_node_size_95_percentile               4321
        kv_node_size_99_percentile               4321
        max_compressed_kv_node_size              1228
        min_compressed_kv_node_size              39
        avg_compressed_kv_node_size              911.1964792458275
        compressed_kv_node_size_90_percentile    1209
        compressed_kv_node_size_95_percentile    1210
        compressed_kv_node_size_99_percentile    1213
        max_key_size                             16
        min_key_size                             16
        avg_key_size                             16.0
        key_size_90_percentile                   16
        key_size_95_percentile                   16
        key_size_99_percentile                   16
        max_value_size                           6
        min_value_size                           6
        avg_value_size                           6.0
        value_size_90_percentile                 6
        value_size_95_percentile                 6
        value_size_99_percentile                 6
    
    Doing key lookups for every 1000th key (2500 keys)
    Percentiles for single key lookups, 90% -> 1.053ms, 95% -> 1.232ms, 99% -> 4.041ms
    
    Compacting btree
    Btree compaction took 12494.651 ms
    Btree stats after compaction:
        btree_size                               19088176
        file_size                                19088176
        fragmentation                            0.0
        kv_chunk_threshold                       7168
        kp_chunk_threshold                       6144
        kv_count                                 2500000
        kp_nodes                                 152
        kv_nodes                                 15625
        max_depth                                4
        min_depth                                4
        avg_depth                                4.0
        depth_90_percentile                      4
        depth_95_percentile                      4
        depth_99_percentile                      4
        max_reduction_size                       8
        min_reduction_size                       8
        avg_reduction_size                       8.0
        reduction_size_90_percentile             8
        reduction_size_95_percentile             8
        reduction_size_99_percentile             8
        max_elements_per_kp_node                 105
        min_elements_per_kp_node                 2
        avg_elements_per_kp_node                 103.78947368421052
        elements_per_kp_node_90_percentile       105
        elements_per_kp_node_95_percentile       105
        elements_per_kp_node_99_percentile       105
        max_elements_per_kv_node                 160
        min_elements_per_kv_node                 160
        avg_elements_per_kv_node                 160.0
        elements_per_kv_node_90_percentile       160
        elements_per_kv_node_95_percentile       160
        elements_per_kv_node_99_percentile       160
        max_kp_node_size                         4516
        min_kp_node_size                         87
        avg_kp_node_size                         4463.9473684210525
        kp_node_size_90_percentile               4516
        kp_node_size_95_percentile               4516
        kp_node_size_99_percentile               4516
        max_compressed_kp_node_size              1686
        min_compressed_kp_node_size              75
        avg_compressed_kp_node_size              1361.5394736842106
        compressed_kp_node_size_90_percentile    1392
        compressed_kp_node_size_95_percentile    1400
        compressed_kp_node_size_99_percentile    1417
        max_kv_node_size                         4321
        min_kv_node_size                         4321
        avg_kv_node_size                         4321.0
        kv_node_size_90_percentile               4321
        kv_node_size_95_percentile               4321
        kv_node_size_99_percentile               4321
        max_compressed_kv_node_size              1228
        min_compressed_kv_node_size              1183
        avg_compressed_kv_node_size              1200.02208
        compressed_kv_node_size_90_percentile    1210
        compressed_kv_node_size_95_percentile    1211
        compressed_kv_node_size_99_percentile    1214
        max_key_size                             16
        min_key_size                             16
        avg_key_size                             16.0
        key_size_90_percentile                   16
        key_size_95_percentile                   16
        key_size_99_percentile                   16
        max_value_size                           6
        min_value_size                           6
        avg_value_size                           6.0
        value_size_90_percentile                 6
        value_size_95_percentile                 6
        value_size_99_percentile                 6
    
    Doing key lookups for every 1000th key (2500 keys)
    Percentiles for single key lookups, 90% -> 1.018ms, 95% -> 1.169ms, 99% -> 1.498ms
    
    ok
    2>
    
    After
    
    1> btree_bench:test(2000000, 500000, 1000, "/home/fdmanana/tmp/foo.couch").
    Creating btree with initial number of items 2000000, batch size 5000, random batches
    Btree created in 268182.733 ms
    Btree stats after creation:
        btree_size                               16422278
        file_size                                1218231774
        fragmentation                            98.65195783343606
        kv_chunk_threshold                       7168
        kp_chunk_threshold                       6144
        kv_count                                 2000000
        kp_nodes                                 662
        kv_nodes                                 28530
        max_depth                                4
        min_depth                                4
        avg_depth                                4.0
        depth_90_percentile                      4
        depth_95_percentile                      4
        depth_99_percentile                      4
        max_reduction_size                       8
        min_reduction_size                       8
        avg_reduction_size                       8.0
        reduction_size_90_percentile             8
        reduction_size_95_percentile             8
        reduction_size_99_percentile             8
        max_elements_per_kp_node                 107
        min_elements_per_kp_node                 1
        avg_elements_per_kp_node                 44.095166163142
        elements_per_kp_node_90_percentile       78
        elements_per_kp_node_95_percentile       93
        elements_per_kp_node_99_percentile       102
        max_elements_per_kv_node                 162
        min_elements_per_kv_node                 1
        avg_elements_per_kv_node                 70.10164738871363
        elements_per_kv_node_90_percentile       136
        elements_per_kv_node_95_percentile       146
        elements_per_kv_node_99_percentile       158
        max_kp_node_size                         4602
        min_kp_node_size                         44
        avg_kp_node_size                         1897.0921450151056
        kp_node_size_90_percentile               3355
        kp_node_size_95_percentile               4000
        kp_node_size_99_percentile               4387
        max_compressed_kp_node_size              2225
        min_compressed_kp_node_size              43
        avg_compressed_kp_node_size              882.0135951661631
        compressed_kp_node_size_90_percentile    1529
        compressed_kp_node_size_95_percentile    1776
        compressed_kp_node_size_99_percentile    1977
        max_kv_node_size                         4375
        min_kv_node_size                         28
        avg_kv_node_size                         1893.7444794952683
        kv_node_size_90_percentile               3673
        kv_node_size_95_percentile               3943
        kv_node_size_99_percentile               4267
        max_compressed_kv_node_size              1234
        min_compressed_kv_node_size              25
        avg_compressed_kv_node_size              546.8209954433929
        compressed_kv_node_size_90_percentile    1028
        compressed_kv_node_size_95_percentile    1103
        compressed_kv_node_size_99_percentile    1189
        max_key_size                             16
        min_key_size                             16
        avg_key_size                             16.0
        key_size_90_percentile                   16
        key_size_95_percentile                   16
        key_size_99_percentile                   16
        max_value_size                           6
        min_value_size                           6
        avg_value_size                           6.0
        value_size_90_percentile                 6
        value_size_95_percentile                 6
        value_size_99_percentile                 6
    
    Doing key lookups for every 1000th key (2000 keys)
    Percentiles for single key lookups, 90% -> 1.301ms, 95% -> 4.603ms, 99% -> 12.706ms
    
    Compacting btree
    Btree compaction took 62111.127 ms
    Btree stats after compaction:
        btree_size                               15269217
        file_size                                15269217
        fragmentation                            0.0
        kv_chunk_threshold                       7168
        kp_chunk_threshold                       6144
        kv_count                                 2000000
        kp_nodes                                 123
        kv_nodes                                 12500
        max_depth                                4
        min_depth                                4
        avg_depth                                4.0
        depth_90_percentile                      4
        depth_95_percentile                      4
        depth_99_percentile                      4
        max_reduction_size                       8
        min_reduction_size                       8
        avg_reduction_size                       8.0
        reduction_size_90_percentile             8
        reduction_size_95_percentile             8
        reduction_size_99_percentile             8
        max_elements_per_kp_node                 105
        min_elements_per_kp_node                 2
        avg_elements_per_kp_node                 102.6178861788618
        elements_per_kp_node_90_percentile       105
        elements_per_kp_node_95_percentile       105
        elements_per_kp_node_99_percentile       105
        max_elements_per_kv_node                 160
        min_elements_per_kv_node                 160
        avg_elements_per_kv_node                 160.0
        elements_per_kv_node_90_percentile       160
        elements_per_kv_node_95_percentile       160
        elements_per_kv_node_99_percentile       160
        max_kp_node_size                         4516
        min_kp_node_size                         87
        avg_kp_node_size                         4413.569105691057
        kp_node_size_90_percentile               4516
        kp_node_size_95_percentile               4516
        kp_node_size_99_percentile               4516
        max_compressed_kp_node_size              1686
        min_compressed_kp_node_size              72
        avg_compressed_kp_node_size              1347.2113821138212
        compressed_kp_node_size_90_percentile    1394
        compressed_kp_node_size_95_percentile    1401
        compressed_kp_node_size_99_percentile    1417
        max_kv_node_size                         4321
        min_kv_node_size                         4321
        avg_kv_node_size                         4321.0
        kv_node_size_90_percentile               4321
        kv_node_size_95_percentile               4321
        kv_node_size_99_percentile               4321
        max_compressed_kv_node_size              1228
        min_compressed_kv_node_size              1183
        avg_compressed_kv_node_size              1199.90384
        compressed_kv_node_size_90_percentile    1210
        compressed_kv_node_size_95_percentile    1211
        compressed_kv_node_size_99_percentile    1214
        max_key_size                             16
        min_key_size                             16
        avg_key_size                             16.0
        key_size_90_percentile                   16
        key_size_95_percentile                   16
        key_size_99_percentile                   16
        max_value_size                           6
        min_value_size                           6
        avg_value_size                           6.0
        value_size_90_percentile                 6
        value_size_95_percentile                 6
        value_size_99_percentile                 6
    
    Doing key lookups for every 1000th key (2000 keys)
    Percentiles for single key lookups, 90% -> 1.324ms, 95% -> 1.45ms, 99% -> 1.849ms
    
    Starting incremental inserts of 500000 new items, in random batches of size 5000
    Incremental inserts took 36604.463 ms
    Btree stats after inserts:
        btree_size                               19431715
        file_size                                171302728
        fragmentation                            88.65650580882752
        kv_chunk_threshold                       7168
        kp_chunk_threshold                       6144
        kv_count                                 2500000
        kp_nodes                                 279
        kv_nodes                                 20419
        max_depth                                4
        min_depth                                4
        avg_depth                                4.0
        depth_90_percentile                      4
        depth_95_percentile                      4
        depth_99_percentile                      4
        max_reduction_size                       8
        min_reduction_size                       8
        avg_reduction_size                       8.0
        reduction_size_90_percentile             8
        reduction_size_95_percentile             8
        reduction_size_99_percentile             8
        max_elements_per_kp_node                 106
        min_elements_per_kp_node                 1
        avg_elements_per_kp_node                 74.18279569892474
        elements_per_kp_node_90_percentile       105
        elements_per_kp_node_95_percentile       105
        elements_per_kp_node_99_percentile       105
        max_elements_per_kv_node                 162
        min_elements_per_kv_node                 1
        avg_elements_per_kv_node                 122.43498702189137
        elements_per_kv_node_90_percentile       160
        elements_per_kv_node_95_percentile       160
        elements_per_kv_node_99_percentile       160
        max_kp_node_size                         4559
        min_kp_node_size                         44
        avg_kp_node_size                         3190.8602150537636
        kp_node_size_90_percentile               4516
        kp_node_size_95_percentile               4516
        kp_node_size_99_percentile               4516
        max_compressed_kp_node_size              2018
        min_compressed_kp_node_size              45
        avg_compressed_kp_node_size              1159.6810035842293
        compressed_kp_node_size_90_percentile    1592
        compressed_kp_node_size_95_percentile    1790
        compressed_kp_node_size_99_percentile    1923
        max_kv_node_size                         4375
        min_kv_node_size                         28
        avg_kv_node_size                         3306.744649591067
        kv_node_size_90_percentile               4321
        kv_node_size_95_percentile               4321
        kv_node_size_99_percentile               4321
        max_compressed_kv_node_size              1230
        min_compressed_kv_node_size              29
        avg_compressed_kv_node_size              927.4621675890103
        compressed_kv_node_size_90_percentile    1209
        compressed_kv_node_size_95_percentile    1210
        compressed_kv_node_size_99_percentile    1213
        max_key_size                             16
        min_key_size                             16
        avg_key_size                             16.0
        key_size_90_percentile                   16
        key_size_95_percentile                   16
        key_size_99_percentile                   16
        max_value_size                           6
        min_value_size                           6
        avg_value_size                           6.0
        value_size_90_percentile                 6
        value_size_95_percentile                 6
        value_size_99_percentile                 6
    
    Doing key lookups for every 1000th key (2500 keys)
    Percentiles for single key lookups, 90% -> 1.115ms, 95% -> 1.264ms, 99% -> 3.103ms
    
    Starting incremental updates of every 5th item in the btree, in batches of size 5000 (500000 items), random order
    Incremental updates took 58408.701 ms
    Btree stats after updates:
        btree_size                               19431715
        file_size                                171302728
        fragmentation                            88.65650580882752
        kv_chunk_threshold                       7168
        kp_chunk_threshold                       6144
        kv_count                                 2500000
        kp_nodes                                 279
        kv_nodes                                 20419
        max_depth                                4
        min_depth                                4
        avg_depth                                4.0
        depth_90_percentile                      4
        depth_95_percentile                      4
        depth_99_percentile                      4
        max_reduction_size                       8
        min_reduction_size                       8
        avg_reduction_size                       8.0
        reduction_size_90_percentile             8
        reduction_size_95_percentile             8
        reduction_size_99_percentile             8
        max_elements_per_kp_node                 106
        min_elements_per_kp_node                 1
        avg_elements_per_kp_node                 74.18279569892474
        elements_per_kp_node_90_percentile       105
        elements_per_kp_node_95_percentile       105
        elements_per_kp_node_99_percentile       105
        max_elements_per_kv_node                 162
        min_elements_per_kv_node                 1
        avg_elements_per_kv_node                 122.43498702189137
        elements_per_kv_node_90_percentile       160
        elements_per_kv_node_95_percentile       160
        elements_per_kv_node_99_percentile       160
        max_kp_node_size                         4559
        min_kp_node_size                         44
        avg_kp_node_size                         3190.8602150537636
        kp_node_size_90_percentile               4516
        kp_node_size_95_percentile               4516
        kp_node_size_99_percentile               4516
        max_compressed_kp_node_size              2018
        min_compressed_kp_node_size              45
        avg_compressed_kp_node_size              1159.6810035842293
        compressed_kp_node_size_90_percentile    1592
        compressed_kp_node_size_95_percentile    1790
        compressed_kp_node_size_99_percentile    1923
        max_kv_node_size                         4375
        min_kv_node_size                         28
        avg_kv_node_size                         3306.744649591067
        kv_node_size_90_percentile               4321
        kv_node_size_95_percentile               4321
        kv_node_size_99_percentile               4321
        max_compressed_kv_node_size              1230
        min_compressed_kv_node_size              29
        avg_compressed_kv_node_size              927.4621675890103
        compressed_kv_node_size_90_percentile    1209
        compressed_kv_node_size_95_percentile    1210
        compressed_kv_node_size_99_percentile    1213
        max_key_size                             16
        min_key_size                             16
        avg_key_size                             16.0
        key_size_90_percentile                   16
        key_size_95_percentile                   16
        key_size_99_percentile                   16
        max_value_size                           6
        min_value_size                           6
        avg_value_size                           6.0
        value_size_90_percentile                 6
        value_size_95_percentile                 6
        value_size_99_percentile                 6
    
    Doing key lookups for every 1000th key (2500 keys)
    Percentiles for single key lookups, 90% -> 1.433ms, 95% -> 1.599ms, 99% -> 3.0ms
    
    Compacting btree
    Btree compaction took 12399.026 ms
    Btree stats after compaction:
        btree_size                               19088176
        file_size                                19088176
        fragmentation                            0.0
        kv_chunk_threshold                       7168
        kp_chunk_threshold                       6144
        kv_count                                 2500000
        kp_nodes                                 152
        kv_nodes                                 15625
        max_depth                                4
        min_depth                                4
        avg_depth                                4.0
        depth_90_percentile                      4
        depth_95_percentile                      4
        depth_99_percentile                      4
        max_reduction_size                       8
        min_reduction_size                       8
        avg_reduction_size                       8.0
        reduction_size_90_percentile             8
        reduction_size_95_percentile             8
        reduction_size_99_percentile             8
        max_elements_per_kp_node                 105
        min_elements_per_kp_node                 2
        avg_elements_per_kp_node                 103.78947368421052
        elements_per_kp_node_90_percentile       105
        elements_per_kp_node_95_percentile       105
        elements_per_kp_node_99_percentile       105
        max_elements_per_kv_node                 160
        min_elements_per_kv_node                 160
        avg_elements_per_kv_node                 160.0
        elements_per_kv_node_90_percentile       160
        elements_per_kv_node_95_percentile       160
        elements_per_kv_node_99_percentile       160
        max_kp_node_size                         4516
        min_kp_node_size                         87
        avg_kp_node_size                         4463.9473684210525
        kp_node_size_90_percentile               4516
        kp_node_size_95_percentile               4516
        kp_node_size_99_percentile               4516
        max_compressed_kp_node_size              1686
        min_compressed_kp_node_size              75
        avg_compressed_kp_node_size              1361.5394736842106
        compressed_kp_node_size_90_percentile    1392
        compressed_kp_node_size_95_percentile    1400
        compressed_kp_node_size_99_percentile    1417
        max_kv_node_size                         4321
        min_kv_node_size                         4321
        avg_kv_node_size                         4321.0
        kv_node_size_90_percentile               4321
        kv_node_size_95_percentile               4321
        kv_node_size_99_percentile               4321
        max_compressed_kv_node_size              1228
        min_compressed_kv_node_size              1183
        avg_compressed_kv_node_size              1200.02208
        compressed_kv_node_size_90_percentile    1210
        compressed_kv_node_size_95_percentile    1211
        compressed_kv_node_size_99_percentile    1214
        max_key_size                             16
        min_key_size                             16
        avg_key_size                             16.0
        key_size_90_percentile                   16
        key_size_95_percentile                   16
        key_size_99_percentile                   16
        max_value_size                           6
        min_value_size                           6
        avg_value_size                           6.0
        value_size_90_percentile                 6
        value_size_95_percentile                 6
        value_size_99_percentile                 6
    
    Doing key lookups for every 1000th key (2500 keys)
    Percentiles for single key lookups, 90% -> 0.943ms, 95% -> 1.106ms, 99% -> 1.396ms
    
    ok
    
    Change-Id: Iee6f4b9a3974e7bca1adf37ff2f0efc384bf4b1c
    Reviewed-on: http://review.couchbase.org/22369
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Reviewed-by: Farshid Ghods <farshid@couchbase.com>
    Tested-by: Farshid Ghods <farshid@couchbase.com>
  3. @fdmanana

    MB-7130 Refactor btree stats

    fdmanana authored Farshid Ghods committed
    Move the code that computes the statistics into a
    separate module and make it easier to add new metrics.
    Also add 90%, 95% and 99% percentiles to all metrics.
    
    Example:
    
    $ curl -s http://localhost:9500/_set_view/default/_design/C/_btree_stats | json
    {
      "id_btree": {
        "btree_size": 73351175,
        "file_size": 13003147447,
        "fragmentation": 99.43589676807885,
        "kv_chunk_threshold": 7168,
        "kp_chunk_threshold": 6144,
        "kv_count": 1986453,
        "kp_nodes": 2589,
        "kv_nodes": 48685,
        "max_depth": 5,
        "min_depth": 5,
        "avg_depth": 5,
        "depth_90_percentile": 5,
        "depth_95_percentile": 5,
        "depth_99_percentile": 5,
        "max_reduction_size": 133,
        "min_reduction_size": 133,
        "avg_reduction_size": 133,
        "reduction_size_90_percentile": 133,
        "reduction_size_95_percentile": 133,
        "reduction_size_99_percentile": 133,
        "max_elements_per_kp_node": 32,
        "min_elements_per_kp_node": 2,
        "avg_elements_per_kp_node": 19.80417149478563,
        "elements_per_kp_node_90_percentile": 31,
        "elements_per_kp_node_95_percentile": 32,
        "elements_per_kp_node_99_percentile": 32,
        "max_elements_per_kv_node": 68,
        "min_elements_per_kv_node": 32,
        "avg_elements_per_kv_node": 40.80215672178289,
        "elements_per_kv_node_90_percentile": 48,
        "elements_per_kv_node_95_percentile": 51,
        "elements_per_kv_node_99_percentile": 62,
        "max_kp_node_size": 5377,
        "min_kp_node_size": 337,
        "avg_kp_node_size": 3328.100811123986,
        "kp_node_size_90_percentile": 5209,
        "kp_node_size_95_percentile": 5377,
        "kp_node_size_99_percentile": 5377,
        "max_compressed_kp_node_size": 4039,
        "min_compressed_kp_node_size": 264,
        "avg_compressed_kp_node_size": 2116.863267670915,
        "compressed_kp_node_size_90_percentile": 3207,
        "compressed_kp_node_size_95_percentile": 3264,
        "compressed_kp_node_size_99_percentile": 3396,
        "max_kv_node_size": 6055,
        "min_kv_node_size": 2931,
        "avg_kv_node_size": 3789.253918044572,
        "kv_node_size_90_percentile": 4419,
        "kv_node_size_95_percentile": 4701,
        "kv_node_size_99_percentile": 5829,
        "max_compressed_kv_node_size": 2312,
        "min_compressed_kv_node_size": 1047,
        "avg_compressed_kv_node_size": 1385.282078668994,
        "compressed_kv_node_size_90_percentile": 1608,
        "compressed_kv_node_size_95_percentile": 1711,
        "compressed_kv_node_size_99_percentile": 2076,
        "max_key_size": 16,
        "min_key_size": 16,
        "avg_key_size": 16,
        "key_size_90_percentile": 16,
        "key_size_95_percentile": 16,
        "key_size_99_percentile": 16,
        "max_value_size": 73,
        "min_value_size": 53,
        "avg_value_size": 71.84445290172987,
        "value_size_90_percentile": 73,
        "value_size_95_percentile": 73,
        "value_size_99_percentile": 73
      },
      "experts2": {
        "btree_size": 41358068,
        "file_size": 13003147447,
        "fragmentation": 99.68193802178608,
        "kv_chunk_threshold": 7168,
        "kp_chunk_threshold": 6144,
        "kv_count": 1986453,
        "kp_nodes": 1576,
        "kv_nodes": 32524,
        "max_depth": 5,
        "min_depth": 5,
        "avg_depth": 5,
        "depth_90_percentile": 5,
        "depth_95_percentile": 5,
        "depth_99_percentile": 5,
        "max_reduction_size": 133,
        "min_reduction_size": 133,
        "avg_reduction_size": 133,
        "reduction_size_90_percentile": 133,
        "reduction_size_95_percentile": 133,
        "reduction_size_99_percentile": 133,
        "max_elements_per_kp_node": 31,
        "min_elements_per_kp_node": 4,
        "avg_elements_per_kp_node": 21.63642131979696,
        "elements_per_kp_node_90_percentile": 30,
        "elements_per_kp_node_95_percentile": 31,
        "elements_per_kp_node_99_percentile": 31,
        "max_elements_per_kv_node": 123,
        "min_elements_per_kv_node": 1,
        "avg_elements_per_kv_node": 61.07652810232444,
        "elements_per_kv_node_90_percentile": 100,
        "elements_per_kv_node_95_percentile": 104,
        "elements_per_kv_node_99_percentile": 111,
        "max_kp_node_size": 5581,
        "min_kp_node_size": 719,
        "avg_kp_node_size": 3889.072335025381,
        "kp_node_size_90_percentile": 5368,
        "kp_node_size_95_percentile": 5546,
        "kp_node_size_99_percentile": 5580,
        "max_compressed_kp_node_size": 4954,
        "min_compressed_kp_node_size": 225,
        "avg_compressed_kp_node_size": 2479.73540609137,
        "compressed_kp_node_size_90_percentile": 3377,
        "compressed_kp_node_size_95_percentile": 3576,
        "compressed_kp_node_size_99_percentile": 4395,
        "max_kv_node_size": 5088,
        "min_kv_node_size": 38,
        "avg_kv_node_size": 2548.569979092363,
        "kv_node_size_90_percentile": 4171,
        "kv_node_size_95_percentile": 4324,
        "kv_node_size_99_percentile": 4639,
        "max_compressed_kv_node_size": 2275,
        "min_compressed_kv_node_size": 40,
        "avg_compressed_kv_node_size": 1142.756887221744,
        "compressed_kv_node_size_90_percentile": 1851,
        "compressed_kv_node_size_95_percentile": 1922,
        "compressed_kv_node_size_99_percentile": 2057,
        "max_key_size": 28,
        "min_key_size": 23,
        "avg_key_size": 27.71111322543247,
        "key_size_90_percentile": 28,
        "key_size_95_percentile": 28,
        "key_size_99_percentile": 28,
        "max_value_size": 9,
        "min_value_size": 9,
        "avg_value_size": 9,
        "value_size_90_percentile": 9,
        "value_size_95_percentile": 9,
        "value_size_99_percentile": 9
      },
      "category": {
        "btree_size": 97917630,
        "file_size": 13003147447,
        "fragmentation": 99.24696977867008,
        "kv_chunk_threshold": 7168,
        "kp_chunk_threshold": 6144,
        "kv_count": 1986453,
        "kp_nodes": 2345,
        "kv_nodes": 46057,
        "max_depth": 5,
        "min_depth": 5,
        "avg_depth": 5,
        "depth_90_percentile": 5,
        "depth_95_percentile": 5,
        "depth_99_percentile": 5,
        "max_reduction_size": 133,
        "min_reduction_size": 133,
        "avg_reduction_size": 133,
        "reduction_size_90_percentile": 133,
        "reduction_size_95_percentile": 133,
        "reduction_size_99_percentile": 133,
        "max_elements_per_kp_node": 30,
        "min_elements_per_kp_node": 4,
        "avg_elements_per_kp_node": 20.64008528784648,
        "elements_per_kp_node_90_percentile": 30,
        "elements_per_kp_node_95_percentile": 30,
        "elements_per_kp_node_99_percentile": 30,
        "max_elements_per_kv_node": 69,
        "min_elements_per_kv_node": 34,
        "avg_elements_per_kv_node": 43.1303167813796,
        "elements_per_kv_node_90_percentile": 50,
        "elements_per_kv_node_95_percentile": 53,
        "elements_per_kv_node_99_percentile": 64,
        "max_kp_node_size": 5551,
        "min_kp_node_size": 740,
        "avg_kp_node_size": 3813.549680170576,
        "kp_node_size_90_percentile": 5541,
        "kp_node_size_95_percentile": 5546,
        "kp_node_size_99_percentile": 5549,
        "max_compressed_kp_node_size": 5063,
        "min_compressed_kp_node_size": 246,
        "avg_compressed_kp_node_size": 2451.555223880597,
        "compressed_kp_node_size_90_percentile": 3443,
        "compressed_kp_node_size_95_percentile": 3487,
        "compressed_kp_node_size_99_percentile": 4355,
        "max_kv_node_size": 5989,
        "min_kv_node_size": 2948,
        "avg_kv_node_size": 3784.008098660356,
        "kv_node_size_90_percentile": 4394,
        "kv_node_size_95_percentile": 4653,
        "kv_node_size_99_percentile": 5629,
        "max_compressed_kv_node_size": 3182,
        "min_compressed_kv_node_size": 1526,
        "avg_compressed_kv_node_size": 1992.260590138307,
        "compressed_kv_node_size_90_percentile": 2299,
        "compressed_kv_node_size_95_percentile": 2425,
        "compressed_kv_node_size_99_percentile": 2945,
        "max_key_size": 33,
        "min_key_size": 28,
        "avg_key_size": 32.71111322543247,
        "key_size_90_percentile": 33,
        "key_size_95_percentile": 33,
        "key_size_99_percentile": 33,
        "max_value_size": 50,
        "min_value_size": 50,
        "avg_value_size": 50,
        "value_size_90_percentile": 50,
        "value_size_95_percentile": 50,
        "value_size_99_percentile": 50
      },
      "realm3": {
        "btree_size": 95096840,
        "file_size": 13003147447,
        "fragmentation": 99.2686629111328,
        "kv_chunk_threshold": 7168,
        "kp_chunk_threshold": 6144,
        "kv_count": 1986453,
        "kp_nodes": 2401,
        "kv_nodes": 45082,
        "max_depth": 5,
        "min_depth": 5,
        "avg_depth": 5,
        "depth_90_percentile": 5,
        "depth_95_percentile": 5,
        "depth_99_percentile": 5,
        "max_reduction_size": 133,
        "min_reduction_size": 133,
        "avg_reduction_size": 133,
        "reduction_size_90_percentile": 133,
        "reduction_size_95_percentile": 133,
        "reduction_size_99_percentile": 133,
        "max_elements_per_kp_node": 30,
        "min_elements_per_kp_node": 4,
        "avg_elements_per_kp_node": 19.7759266972095,
        "elements_per_kp_node_90_percentile": 30,
        "elements_per_kp_node_95_percentile": 30,
        "elements_per_kp_node_99_percentile": 30,
        "max_elements_per_kv_node": 70,
        "min_elements_per_kv_node": 35,
        "avg_elements_per_kv_node": 44.06310722683111,
        "elements_per_kv_node_90_percentile": 51,
        "elements_per_kv_node_95_percentile": 54,
        "elements_per_kv_node_99_percentile": 65,
        "max_kp_node_size": 5491,
        "min_kp_node_size": 732,
        "avg_kp_node_size": 3614.363182007497,
        "kp_node_size_90_percentile": 5468,
        "kp_node_size_95_percentile": 5486,
        "kp_node_size_99_percentile": 5490,
        "max_compressed_kp_node_size": 4847,
        "min_compressed_kp_node_size": 239,
        "avg_compressed_kp_node_size": 2353.94543940025,
        "compressed_kp_node_size_90_percentile": 3425,
        "compressed_kp_node_size_95_percentile": 3476,
        "compressed_kp_node_size_99_percentile": 4476,
        "max_kv_node_size": 5951,
        "min_kv_node_size": 2963,
        "avg_kv_node_size": 3777.697972583293,
        "kv_node_size_90_percentile": 4380,
        "kv_node_size_95_percentile": 4634,
        "kv_node_size_99_percentile": 5578,
        "max_compressed_kv_node_size": 3169,
        "min_compressed_kv_node_size": 1513,
        "avg_compressed_kv_node_size": 1975.109467193115,
        "compressed_kv_node_size_90_percentile": 2280,
        "compressed_kv_node_size_95_percentile": 2401,
        "compressed_kv_node_size_99_percentile": 2882,
        "max_key_size": 31,
        "min_key_size": 26,
        "avg_key_size": 30.71111322543247,
        "key_size_90_percentile": 31,
        "key_size_95_percentile": 31,
        "key_size_99_percentile": 31,
        "max_value_size": 50,
        "min_value_size": 50,
        "avg_value_size": 50,
        "value_size_90_percentile": 50,
        "value_size_95_percentile": 50,
        "value_size_99_percentile": 50
      },
      "realm2": {
        "btree_size": 52542690,
        "file_size": 13003147447,
        "fragmentation": 99.59592329307839,
        "kv_chunk_threshold": 7168,
        "kp_chunk_threshold": 6144,
        "kv_count": 1986453,
        "kp_nodes": 1545,
        "kv_nodes": 30876,
        "max_depth": 5,
        "min_depth": 5,
        "avg_depth": 5,
        "depth_90_percentile": 5,
        "depth_95_percentile": 5,
        "depth_99_percentile": 5,
        "max_reduction_size": 133,
        "min_reduction_size": 133,
        "avg_reduction_size": 133,
        "reduction_size_90_percentile": 133,
        "reduction_size_95_percentile": 133,
        "reduction_size_99_percentile": 133,
        "max_elements_per_kp_node": 30,
        "min_elements_per_kp_node": 4,
        "avg_elements_per_kp_node": 20.98381877022654,
        "elements_per_kp_node_90_percentile": 26,
        "elements_per_kp_node_95_percentile": 28,
        "elements_per_kp_node_99_percentile": 30,
        "max_elements_per_kv_node": 118,
        "min_elements_per_kv_node": 1,
        "avg_elements_per_kv_node": 64.336474931986,
        "elements_per_kv_node_90_percentile": 94,
        "elements_per_kv_node_95_percentile": 98,
        "elements_per_kv_node_99_percentile": 105,
        "max_kp_node_size": 5489,
        "min_kp_node_size": 732,
        "avg_kp_node_size": 3834.906796116505,
        "kp_node_size_90_percentile": 4759,
        "kp_node_size_95_percentile": 5119,
        "kp_node_size_99_percentile": 5477,
        "max_compressed_kp_node_size": 5220,
        "min_compressed_kp_node_size": 241,
        "avg_compressed_kp_node_size": 2607.493851132686,
        "compressed_kp_node_size_90_percentile": 3218,
        "compressed_kp_node_size_95_percentile": 3375,
        "compressed_kp_node_size_99_percentile": 4833,
        "max_kv_node_size": 5169,
        "min_kv_node_size": 42,
        "avg_kv_node_size": 2877.555415209224,
        "kv_node_size_90_percentile": 4218,
        "kv_node_size_95_percentile": 4372,
        "kv_node_size_99_percentile": 4675,
        "max_compressed_kv_node_size": 2834,
        "min_compressed_kv_node_size": 44,
        "avg_compressed_kv_node_size": 1562.440082912294,
        "compressed_kv_node_size_90_percentile": 2275,
        "compressed_kv_node_size_95_percentile": 2363,
        "compressed_kv_node_size_99_percentile": 2525,
        "max_key_size": 31,
        "min_key_size": 26,
        "avg_key_size": 30.71111322543247,
        "key_size_90_percentile": 31,
        "key_size_95_percentile": 31,
        "key_size_99_percentile": 31,
        "max_value_size": 9,
        "min_value_size": 9,
        "avg_value_size": 9,
        "value_size_90_percentile": 9,
        "value_size_95_percentile": 9,
        "value_size_99_percentile": 9
      }
    }
    
    Change-Id: I353f68901475eb07a390e59793e7af0b191a50b7
    Reviewed-on: http://review.couchbase.org/22368
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Reviewed-by: Farshid Ghods <farshid@couchbase.com>
    Tested-by: Farshid Ghods <farshid@couchbase.com>
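    As a rough illustration of the stats shown above (a hypothetical Python sketch, not the project's actual Erlang code), the reported min/max/avg and 90/95/99 percentiles for each metric can be derived from a sample list using the nearest-rank method:

```python
# Hypothetical sketch: compute the summary stats reported per btree metric
# (e.g. node sizes, elements per node) from a list of numeric samples.
def metric_stats(samples):
    ordered = sorted(samples)
    n = len(ordered)

    def pct(p):
        # nearest-rank percentile: the ceil(p/100 * n)-th smallest sample
        rank = max(1, -(-p * n // 100))  # ceiling division via floor of negation
        return ordered[rank - 1]

    return {
        "max": ordered[-1],
        "min": ordered[0],
        "avg": sum(ordered) / n,
        "90_percentile": pct(90),
        "95_percentile": pct(95),
        "99_percentile": pct(99),
    }
```

    With constant samples (like the key sizes above, all 16), every field collapses to the same value, which matches the min = max = percentile pattern in the output.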
Commits on Nov 8, 2012
  1. @fdmanana

    Revert "MB-6107 Remove no longer needed code"

    fdmanana authored Farshid Ghods committed
    This reverts commit c148470.
    
    It's still needed. While the compactor is running, the updater
    might be transferring the replica partitions (this happens right
    after failover) and finish before the compactor does. When the
    compactor then finishes, we accept a snapshot where the replica
    partitions' data is not signaled as transferred. This will
    self-heal over time, but it may take a considerable amount of
    time.
    
    Change-Id: Id0dcf0a6dcfd4904e3e9d1283e229f59047639b2
    Reviewed-on: http://review.couchbase.org/22312
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
  2. @fdmanana

    MB-7117 Fix validation of skip parameter (must be non-negative)

    fdmanana authored Farshid Ghods committed
    Change-Id: Iec235ce9372a4dcb67cc8f820981477ef7e89a53
    Reviewed-on: http://review.couchbase.org/22336
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Tested-by: Damien Katz <damien@couchbase.com>
Commits on Nov 6, 2012
  1. @Damienkatz @steveyen

    MB-7046: Fix file open retry interval to never be infinity

    Damienkatz authored steveyen committed
    We were reusing the "file close on non-use" interval for the retry on eacces
    or emfile errors, but sometimes that interval was infinity, which meant
    we would never retry. Instead, use an independent interval that is never
    infinity.
    
    Change-Id: I3e092fe7fee7b94c1bd095ac7483334952544b5c
    Reviewed-on: http://review.couchbase.org/22285
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Tested-by: Damien Katz <damien@couchbase.com>
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Reviewed-by: Filipe David Borba Manana <fdmanana@gmail.com>
  2. @fdmanana

    MB-7106 Log partition monitor requests/replies

    fdmanana authored Farshid Ghods committed
    Log, with debug level, whenever a partition update monitor
    is requested and when a reply is sent back to the caller.
    This allows us to know how much a caller waited for a
    particular partition to be fully indexed.
    
    Change-Id: I552a95ba3e8f649fe3f86d932fa99635bd63a490
    Reviewed-on: http://review.couchbase.org/22309
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
Commits on Nov 3, 2012
  1. @fdmanana

    MB-7082 Revert "MB-6107 Remove no longer needed and wrong logic"

    fdmanana authored Farshid Ghods committed
    This reverts commit 29ea390.
    
    Turns out that for some periods of time, the same partition can be
    marked as active in 2 nodes. Therefore for one of them, the group
    snapshot bitmasks have to be manipulated, so that we get results
    for that partition from only one node.
    Due to the lack of full atomicity, ns_server passes a list of wanted
    partitions for each node, which deals with this case. These lists
    of wanted partitions must be strictly respected.
    
    Change-Id: Ie6cc2464ff13b196c4bbefda90693d34d89f0bb2
    Reviewed-on: http://review.couchbase.org/22251
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
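    The bitmask manipulation described above can be pictured with a minimal sketch (hypothetical Python, not the project's Erlang): each partition corresponds to one bit, and restricting a group's snapshot to a node's wanted-partitions list is a bitwise AND:

```python
# Hypothetical illustration of partition bitmasks: partition p is bit p.
def partitions_to_bitmask(partitions):
    mask = 0
    for p in partitions:
        mask |= 1 << p
    return mask

def restrict_snapshot(active_bitmask, wanted_partitions):
    # Keep only the partitions this node is expected to answer for,
    # so a partition active on two nodes is served by just one of them.
    return active_bitmask & partitions_to_bitmask(wanted_partitions)
```

    For example, if partitions 0-3 are active but only 0 and 2 are wanted on this node, the restricted mask keeps just bits 0 and 2.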
  2. @fdmanana

    MB-6107 Don't write more than 1 header on index compaction

    fdmanana authored Farshid Ghods committed
    No matter how many retries happen, only write one header to
    the compacted index file. This reduces final file fragmentation
    (headers are big, and always written at 4K file boundaries) and
    helps index compaction catch up with concurrent index updates
    more quickly.
    
    Change-Id: I37a3fd55e7ef54cf08b9c383db14e1d1f2654736
    Reviewed-on: http://review.couchbase.org/22195
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
Commits on Nov 2, 2012
  1. @Damienkatz @steveyen

    MB-6945: Increase size of rev seq from 32 bits to 48 bits

    Damienkatz authored steveyen committed
    Also, bump file version so we don't open old files with 32 bit format.
    
    Change-Id: Ifecaa99e6e7f6fd543d2ba0c88f5add0301508fd
    Reviewed-on: http://review.couchbase.org/22098
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Tested-by: Damien Katz <damien@couchbase.com>
    Reviewed-by: Aaron Miller <apage43@ninjawhale.com>
    Reviewed-by: Steve Yen <steve.yen@gmail.com>
    Tested-by: Steve Yen <steve.yen@gmail.com>
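    To illustrate the width change (a hypothetical sketch, not the actual couchstore on-disk encoding): a 32-bit rev seq wraps after 2^32 - 1 (about 4.29 billion) updates, while a 48-bit value fits in six bytes:

```python
import struct

REV_SEQ_BITS = 48  # widened from 32 bits in this commit

def pack_rev_seq(seq):
    # Store the 48-bit sequence as 6 big-endian bytes by packing into
    # 8 bytes and dropping the two high (zero) bytes.
    assert 0 <= seq < (1 << REV_SEQ_BITS)
    return struct.pack(">Q", seq)[2:]

def unpack_rev_seq(raw):
    assert len(raw) == 6
    return struct.unpack(">Q", b"\x00\x00" + raw)[0]
```

    Bumping the file version, as the commit notes, prevents new code from misreading old files that still use the 4-byte layout.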
Commits on Nov 1, 2012
  1. @fdmanana

    MB-7055 Start reduce context when getting row count

    fdmanana authored Farshid Ghods committed
    If a view has a custom reduce function (JavaScript), is empty
    and is queried with ?reduce=false, then its reduce function
    is called against an empty list. This call failed because no
    reduce context was started.
    
    Change-Id: I7ff84125c3cee6a38baa7e84b4dc980df3e8c3c6
    Reviewed-on: http://review.couchbase.org/22107
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
Commits on Oct 31, 2012
  1. @fdmanana @steveyen

    MB-6107 Remove couch_file:sync/1 call after index compaction

    fdmanana authored steveyen committed
    Not really worth the cost, and after a crash/node restart,
    ns_server always attempts to correctly configure indexes
    (as of MB-6310).
    
    Change-Id: I13b39a70ea343641027bfe96c6ebffe5ed5888a5
    Reviewed-on: http://review.couchbase.org/22037
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Tested-by: Damien Katz <damien@couchbase.com>
  2. @fdmanana @steveyen

    MB-6107 Remove unnecessary tuple construction

    fdmanana authored steveyen committed
    Leaf nodes are always of type kv_node.
    
    Change-Id: I5b3cfab90a3cb596e80aba0958b14acc2c2fd418
    Reviewed-on: http://review.couchbase.org/22036
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Tested-by: Damien Katz <damien@couchbase.com>
  3. @fdmanana @steveyen

    MB-6107 Remove no longer needed and wrong logic

    fdmanana authored steveyen committed
    No need to modify the group snapshot bitmasks. The desired
    set of indexable partitions must match the group's active
    bitmask only.
    
    Change-Id: I7dce56632742127e25b986f3cb30f80163f5ab8e
    Reviewed-on: http://review.couchbase.org/22035
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Tested-by: Damien Katz <damien@couchbase.com>
  4. @fdmanana @steveyen

    MB-6107 Remove no longer needed code

    fdmanana authored steveyen committed
    If the set of replicas on transfer is modified, the compactor
    is restarted, so there's no point in checking whether the
    compactor's group has an up-to-date set of replicas on transfer.
    
    Change-Id: Ifd97c5ba79ee2ae6699939bd739378cd455152aa
    Reviewed-on: http://review.couchbase.org/22034
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Tested-by: Damien Katz <damien@couchbase.com>
  5. @fdmanana @steveyen

    MB-7039 Avoid crash if replica index file is missing

    fdmanana authored steveyen committed
    When opening a previously created and configured index, if
    for some reason the replica index file is missing (or doesn't
    have a header, due to a crash), we reach a situation where
    opening the index becomes impossible unless we explicitly
    delete the main index file.
    
    The solution here is to configure the replica index, if it's
    missing, when opening the main view group.
    
    We haven't observed this happening in practice, but it's a
    possibility, so it's worth addressing for extra reliability.
    
    Change-Id: I42fe0276770a4469a83a94e5e0f88bbbd5370d99
    Reviewed-on: http://review.couchbase.org/22038
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Tested-by: Damien Katz <damien@couchbase.com>
Commits on Oct 29, 2012
  1. @fdmanana @steveyen

    MB-6957 Retry file operations on Windows

    fdmanana authored steveyen committed
    Some operations provided by the Erlang 'file' module open
    files without all the share flags, and there may be external
    processes (Windows services, antivirus software, etc.) opening
    files without those flags as well. This makes some concurrent
    operations against the same file fail with a Windows share
    violation error, which the Erlang file driver maps to the
    POSIX error 'eacces'.
    
    When this happens, just retry the failed operations for a
    limited period of time, after which we give up.
    
    Change-Id: Iaecc6d520169d8b84bfcb354066e42b533c435cd
    Reviewed-on: http://review.couchbase.org/22042
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Tested-by: Damien Katz <damien@couchbase.com>
    Reviewed-by: Damien Katz <damien@couchbase.com>
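    The retry behavior described in MB-6957 is a general pattern:
    retry an operation while it fails with 'eacces', up to a
    deadline. A language-agnostic sketch of that pattern (not the
    actual Erlang couch_file code; names and timeouts here are
    illustrative only):

    ```python
    import errno
    import time

    def retry_on_eacces(op, timeout=5.0, interval=0.1):
        """Retry `op` while it fails with EACCES (e.g. a Windows
        share violation), giving up once `timeout` seconds elapse.
        Sketch of the pattern only; values are hypothetical."""
        deadline = time.monotonic() + timeout
        while True:
            try:
                return op()
            except OSError as e:
                # Re-raise anything that isn't EACCES, or EACCES
                # once the deadline has passed.
                if e.errno != errno.EACCES or time.monotonic() >= deadline:
                    raise
                time.sleep(interval)
    ```

    The key design point matches the commit message: transient
    share violations are retried, but the operation still fails
    hard after a bounded period rather than looping forever.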
Commits on Oct 26, 2012
  1. @fdmanana @steveyen

    MB-7030 Avoid unnecessary group pid lookup

    fdmanana authored steveyen committed
    When the list of partitions to mark as indexable or unindexable
    is empty, make the call a complete no-op: don't look up the
    design document in the cache (or on disk), and don't compute
    its signature to get a group pid.
    
    Change-Id: If219aba61846ec6139e19055cbab0e1ad8f51764
    Reviewed-on: http://review.couchbase.org/22012
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Tested-by: Damien Katz <damien@couchbase.com>
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
  2. @fdmanana @steveyen

    MB-7030 Don't restart cleanup process

    fdmanana authored steveyen committed
    When doing the transition of partitions between the indexable
    and unindexable states, don't restart the cleanup process.
    When this process finishes, update its header's list of
    indexable and unindexable partitions.
    
    This saves some CPU and IO, and can save up to 5 seconds per
    state transition request (one that asks only to toggle partitions
    between the indexable and unindexable states).
    
    Change-Id: Ida8dd5b2ade1d039de363a88e4964c130cc17046
    Reviewed-on: http://review.couchbase.org/22011
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Tested-by: Damien Katz <damien@couchbase.com>
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
  3. @fdmanana @steveyen

    MB-7030 Don't fsync when doing some header commits

    fdmanana authored steveyen committed
    Fsync after writing a header only if the new cleanup bitmask
    differs from the old cleanup bitmask. This is safe nowadays,
    as ns_server configures view groups on startup (MB-6310) and
    sets the correct vbucket states. The fsync when the cleanup
    bitmask changes is necessary because otherwise, on restart
    after a crash, the view group might attempt to open a vbucket
    database that was deleted right before the crash, before the
    header hit the disk.
    
    Change-Id: I6b70cb7fae55f6a856c56aae8659e8072a64b351
    Reviewed-on: http://review.couchbase.org/22010
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Tested-by: Damien Katz <damien@couchbase.com>
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
  4. @fdmanana @steveyen

    MB-7030 Avoid a no-longer-useful header commit

    fdmanana authored steveyen committed
    When toggling partitions between the indexable and unindexable
    states, don't build and write a header (and fsync it). This
    no longer compromises reliability: if a crash happens, no
    important state information is lost, because on restart
    ns_server always attempts to configure indexes correctly
    (since MB-6310).
    
    Change-Id: I992d3543ceef233b2694489fbc981511e4922df0
    Reviewed-on: http://review.couchbase.org/22009
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Tested-by: Damien Katz <damien@couchbase.com>
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
  5. @fdmanana @steveyen

    MB-7030 Check mail box for new group snapshot

    fdmanana authored steveyen committed
    When stopping the updater, check whether it has sent us a
    new group snapshot; if so, process it.
    Before this change the snapshot was ignored, meaning that the
    next time the updater was restarted, it would repeat some
    work, wasting CPU and IO.
    
    Change-Id: I840bc797567a1d7c81f9970aaf4e1b9d20271b1e
    Reviewed-on: http://review.couchbase.org/22008
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Tested-by: Damien Katz <damien@couchbase.com>
  6. @fdmanana @steveyen

    MB-7030 Use some cheaper BIFs for list and orddicts

    fdmanana authored steveyen committed
    lists:member/2 is cheaper than ordsets:is_element/2, and
    lists:keyfind/3 is cheaper than orddict:is_key/2 and
    orddict:fetch/2.
    
    Change-Id: I657f644d7b675097cec46cfbabb0aba4ff02fcd7
    Reviewed-on: http://review.couchbase.org/22007
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Tested-by: Damien Katz <damien@couchbase.com>
  7. @fdmanana @steveyen

    MB-7030 Always send new group snapshot to parent

    fdmanana authored steveyen committed
    Instead of sending it with a minimum periodicity of 5 seconds,
    send the new view group snapshot to the parent every time a
    batch of changes is applied to the btrees. This prevents
    discarding some indexing progress when there's a quick
    succession of partition state transitions, as happens during
    rebalance.
    
    This restriction was added in the past with the goal of
    reducing memory copying (group snapshots used to be much
    heavier structures).
    
    Change-Id: I4a56b1eaf1019e3203dcaf4e23422145a0b64d21
    Reviewed-on: http://review.couchbase.org/22006
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Tested-by: Damien Katz <damien@couchbase.com>
  8. @fdmanana @steveyen

    MB-6107 Make index btree chunk thresholds configurable

    fdmanana authored steveyen committed
    Allow the chunk thresholds for the btrees created for views
    to be changed via couch_config. The purpose is simply to let
    the evperf team easily run tests with different values for
    these thresholds.
    
    Change-Id: I0e68127159b38081dd64391d4dc2b43004233113
    Reviewed-on: http://review.couchbase.org/21968
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Tested-by: Damien Katz <damien@couchbase.com>
  9. @fdmanana @steveyen

    MB-6107 More efficient index compaction retry phase

    fdmanana authored steveyen committed
    Ensure that during the index compaction retry phase, we
    don't insert/remove many small batches into the btrees.
    Instead, do fewer batch operations with larger batches,
    reducing IO and causing less fragmentation.
    
    Change-Id: Idc9a8059c2e74f4f3cfe19a7be98e3a9d556500d
    Reviewed-on: http://review.couchbase.org/21965
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Tested-by: Damien Katz <damien@couchbase.com>
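    The batching strategy described above (fewer, larger btree
    operations during the retry phase) can be sketched generically:
    accumulate incoming small batches and only flush once a size
    threshold is reached. This is an illustrative sketch, not the
    actual Erlang implementation; the function name and threshold
    are hypothetical:

    ```python
    def coalesce_batches(batches, threshold):
        """Merge a stream of small insert/remove batches into
        fewer, larger ones, flushing whenever at least
        `threshold` items have accumulated."""
        buffer = []
        for batch in batches:
            buffer.extend(batch)
            if len(buffer) >= threshold:
                yield buffer  # one large btree operation
                buffer = []
        if buffer:
            yield buffer  # flush the remainder at the end
    ```

    Each yielded list stands in for one btree insert/remove pass,
    so a long stream of tiny batches turns into a handful of
    larger passes, which is the IO/fragmentation win the commit
    describes.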
  10. @fdmanana @steveyen

    MB-6107 Do less fsyncs on index compaction

    fdmanana authored steveyen committed
    When reaching the retry phase, there's really no need to
    do an fsync for each retry iteration. Instead just do a
    single one at the end.
    
    Also change the order of some flush calls (without affecting
    correctness) just to increase parallelism.
    
    Change-Id: I7021cde3aa54b5c9c7a04b0513301bce6c30ee8d
    Reviewed-on: http://review.couchbase.org/21964
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Tested-by: Damien Katz <damien@couchbase.com>
  11. @aartamonau @steveyen

    MB-7025 Shutdown databases in couch_server:terminate correctly.

    aartamonau authored steveyen committed
    As of a recent change, couch_dbs_by_name contains only the
    PID and nothing more.
    
    It's not clear, though, whether this actually fixes the
    referenced bug.
    
    Change-Id: Ifb3a0bd9a9c011c4836e0513a4adbaba94c3c8c1
    Reviewed-on: http://review.couchbase.org/22021
    Reviewed-by: Aliaksey Kandratsenka <alkondratenko@gmail.com>
    Tested-by: Aliaksey Kandratsenka <alkondratenko@gmail.com>
    Reviewed-by: Filipe David Borba Manana <fdmanana@gmail.com>
Commits on Oct 25, 2012
  1. @aartamonau

    MB-6995 Stop related apps when couchdb application is stopped.

    aartamonau authored Farshid Ghods committed
    When ns_server needs to restart couchdb, we want it to restart
    not only the couch app but also all the related apps.
    
    Change-Id: Ic8e1d316fd1c2e765c7783e5009cbf9448cc9b13
    Reviewed-on: http://review.couchbase.org/21947
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Reviewed-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Tested-by: Aliaksey Kandratsenka <alkondratenko@gmail.com>
  2. @aartamonau

    MB-6995 Don't restart couch_{server,set_view} on dir changes.

    aartamonau authored Farshid Ghods committed
    If some external (meaning ns_server) process calls anything from
    couchdb at the time of restart, it may cause that process to
    terminate the hard way with a reached_max_restart_intensity
    error. To avoid this, we're going to stop couchdb from ns_server
    in coordination with all the other ns_server processes.
    
    Change-Id: Ifea0c6e3bbebbdd36efedd4fe10bb2a7ccb27892
    Reviewed-on: http://review.couchbase.org/21946
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Reviewed-by: Aliaksey Kandratsenka <alkondratenko@gmail.com>
    Reviewed-by: Damien Katz <damien@couchbase.com>
    Reviewed-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Tested-by: Aliaksey Kandratsenka <alkondratenko@gmail.com>
  3. @fdmanana

    MB-6947 Don't log set_view_outdated errors in httpd layer

    fdmanana authored Farshid Ghods committed
    These are already logged by the index/view merger, with a
    log level of 'info' and with more useful details.
    
    Change-Id: Id919d4ffedc60f58bf708f209689d0ab7eb2408a
    Reviewed-on: http://review.couchbase.org/21963
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Reviewed-by: Farshid Ghods <farshid@couchbase.com>
    Tested-by: Farshid Ghods <farshid@couchbase.com>
  4. @fdmanana @steveyen

    MB-6990 Ignore missing module during make check

    fdmanana authored steveyen committed
    During make check, or when running CouchDB standalone for
    testing purposes, the spatial modules (GeoCouch) are not in
    the Erlang path, so just ignore the function call in this
    case.
    
    Change-Id: Ia02b37eef52960852189c2dc2b689198cecf89dc
    Reviewed-on: http://review.couchbase.org/21962
    Tested-by: Filipe David Borba Manana <fdmanana@gmail.com>
    Reviewed-by: Volker Mische <volker.mische@gmail.com>
    Tested-by: Volker Mische <volker.mische@gmail.com>