Fix splitting shards with large purge sequences #3739
Merged
Previously, if the source db purge sequence > `purge_infos_limit`, shard splitting would crash with a `{{invalid_start_purge_seq,0}, [{couch_bt_engine,fold_purge_infos,5...` error. That was because purge sequences were always copied starting from 0, which only worked as long as the total number of purges stayed below the `purge_infos_limit` threshold. To correctly gather the purge sequences, the start sequence must be based off the actual oldest sequence currently available.

An example of how this should be done is in the `mem3_rpc` module, when loading purge infos, so here we do exactly the same. The `MinSeq - 1` logic is also evident from inspecting the `fold_purge_infos` function; a sketch follows.
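As a minimal sketch of that idea, assuming the `couch_db` public API (`get_oldest_purge_seq/1`, `fold_purge_infos/4`); the module name, fold body, and accumulator here are hypothetical, not the actual patch:

```erlang
%% Sketch: start the purge-info fold from the oldest purge sequence
%% still present in the purge btree, rather than from a hard-coded 0,
%% mirroring what mem3_rpc does when loading purge infos.
-module(split_purge_fold_sketch).
-export([copy_purge_infos/1]).

copy_purge_infos(SourceDb) ->
    %% Oldest purge seq still available; anything older has been
    %% trimmed away per purge_infos_limit.
    {ok, Oldest} = couch_db:get_oldest_purge_seq(SourceDb),
    %% The MinSeq - 1 logic from the description, floored at 0
    %% defensively for the no-purges case.
    StartSeq = erlang:max(0, Oldest - 1),
    FoldFun = fun({PSeq, UUID, Id, Revs}, Acc) ->
        %% A real implementation would write the purge info to the
        %% target shard here; this sketch just collects it.
        {ok, [{PSeq, UUID, Id, Revs} | Acc]}
    end,
    couch_db:fold_purge_infos(SourceDb, StartSeq, FoldFun, []).
```

The start sequence is exclusive: the fold begins at the entry after it, which is why `MinSeq - 1` is the right starting point and why a hard-coded 0 trips the `invalid_start_purge_seq` check once older purge infos have been trimmed.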
The test sets up exactly the scenario described above: it reduces the purge info limit to 10, then purges 20 documents. By purging more than the limit, we ensure the starting sequence is no longer 0. However, the purge sequence btree is only actually trimmed down during compaction, which is why there are a few extra helper functions to ensure compaction runs and finishes before shard splitting starts. A rough sketch of that setup follows.
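A sketch of that test setup, assuming `couch_db`'s purge and compaction API (`set_purge_infos_limit/2`, `purge_docs/2`, `start_compact/1`, `wait_for_compaction/1`); exact return values and the function shape are illustrative, not the PR's actual test code:

```erlang
%% Sketch of the test scenario: lower the purge info limit to 10,
%% purge 20 docs so the oldest purge seq moves past 0, then compact
%% so the purge btree is actually trimmed before splitting.
-module(split_purge_setup_sketch).
-include_lib("couch/include/couch_db.hrl").
-export([setup_purged_db/2]).

setup_purged_db(Db, Docs) when length(Docs) == 20 ->
    ok = couch_db:set_purge_infos_limit(Db, 10),
    lists:foreach(fun(#doc{id = Id, revs = {Pos, [Rev | _]}}) ->
        %% Purge requests are {UUID, DocId, Revs} tuples.
        PurgeReq = {couch_uuids:random(), Id, [{Pos, Rev}]},
        {ok, _} = couch_db:purge_docs(Db, [PurgeReq])
    end, Docs),
    %% Compaction is what trims purge infos beyond the limit, so make
    %% sure it has run and finished before the shard split starts.
    {ok, _} = couch_db:start_compact(Db),
    couch_db:wait_for_compaction(Db).
```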
Fixes: #3738