Enforce doc_ids _changes filter optimization limit
It turns out the `changes_doc_ids_optimization_threshold` limit has never been applied to clustered changes feeds, so it was effectively unlimited. This commit enables it and adds tests to ensure the limit works.

Since we didn't have a good Erlang integration test suite for clustered changes feeds, which is what allowed this case to slip through the cracks, add a few more tests along the way to cover the majority of parameter combinations that might interact: sharding (single shard vs multiple shards), continuous vs normal feeds, reverse ordering, row limits, etc.

The previous limit was 100, but since it was never actually applied it was equivalent to not having one at all, so let's pick a new one. I chose 1000 after noticing that at Cloudant, with close to 3000 doc ids, we hit fabric timeouts on a busy cluster, so that seemed too high; 1000 is also roughly the size a _bulk_get batch might be. A benchmarking eunit test https://gist.github.com/nickva/a21ef04b7e4bdbed5fdeb708f1d613b5 showed about 50-75 msec to query batches of 1000 random (uuid) doc_ids for Q values 1 through 8.
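For context, a minimal sketch of what enforcing such a threshold might look like in Erlang. The module and function names here are hypothetical and this is not the actual CouchDB change; only the `changes_doc_ids_optimization_threshold` config key and the default of 1000 come from the commit message above.

```erlang
%% Hypothetical sketch: decide whether the _doc_ids changes filter may use
%% the by-doc-id optimization, based on the configured threshold.
-module(changes_doc_ids_limit).
-export([use_doc_ids_optimization/1]).

%% Assumed default, matching the new limit chosen in this commit.
-define(DEFAULT_THRESHOLD, 1000).

%% Returns true when the number of requested doc ids is at or below the
%% configured threshold; otherwise the feed should fall back to scanning
%% the changes sequence normally.
use_doc_ids_optimization(DocIds) when is_list(DocIds) ->
    Threshold = config:get_integer(
        "couchdb", "changes_doc_ids_optimization_threshold", ?DEFAULT_THRESHOLD
    ),
    length(DocIds) =< Threshold.
```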