Take into account all new partitions when checking shard limits (backport #15843) #15852
Takes a fix from #15813 but with a simpler test.
It's possible to significantly exceed the limit: by roughly `values_count/nodes` times. If the initial setup was fine for a single partition, the whole insert passed because we took into account only one partition.
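To make the arithmetic concrete, here is a minimal, self-contained Java sketch (the class, method, and variable names are hypothetical and not CrateDB's actual `ShardLimitValidator` API) contrasting a check that counts only one new partition with one that counts every partition the statement will create:

```java
// Hypothetical, simplified illustration -- not CrateDB's real ShardLimitValidator.
public class ShardLimitSketch {

    // Returns true if the projected shard count stays within the cluster limit.
    static boolean withinLimit(int currentOpenShards,
                               int maxShardsPerNode,
                               int numNodes,
                               int newShards) {
        return currentOpenShards + newShards <= maxShardsPerNode * numNodes;
    }

    public static void main(String[] args) {
        int currentOpenShards = 990;   // shards already open in the cluster
        int maxShardsPerNode = 1000;   // e.g. cluster.max_shards_per_node
        int numNodes = 1;
        int newPartitions = 50;        // partitions a single INSERT would create
        int shardsPerPartition = 4;

        // Buggy check: only one new partition is taken into account,
        // so the statement passes even though it will open 200 new shards.
        boolean buggy = withinLimit(currentOpenShards, maxShardsPerNode, numNodes,
                shardsPerPartition);

        // Fixed check: all new partitions of the statement are taken into account.
        boolean fixed = withinLimit(currentOpenShards, maxShardsPerNode, numNodes,
                newPartitions * shardsPerPartition);

        System.out.println("buggy check passes: " + buggy);  // true  -> limit exceeded later
        System.out.println("fixed check passes: " + fixed);  // false -> statement rejected
    }
}
```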
This is not a regression caused by #13843.
I also checked 5.2, and the same problem exists there. In 5.2 we used to create the "probe" index multiple times via `indicesService.createIndex`, but that doesn't affect `int currentOpenShards = state.metadata().getTotalOpenIndexShards()` (probably because we call `allocationService.reroute` only once after the loop).

This is not a fix for #15803 (which is about expanding replicas, or something else, but not related to partitioned tables; it may even be expected behavior that just needs expanded docs. I will have a dedicated PR for it, out of the scope of this PR).
`insert from select` is also slightly affected, but I couldn't significantly exceed the limit. `BulkShardCreationLimiter` leads to creating new partitions in smaller batches, so creating a batch actually increases `state.metadata().getTotalOpenIndexShards()`, and thus `ShardLimitValidator` more or less keeps up with the actual state and cannot significantly exceed the limit. In other words, it fails to take into account not `total_new_partitions - 1` partitions but "only" `new_partitions_batch_size - 1`.
Anyway, this change addresses `insert from select` as well, even though it's a less visible problem than `insert from values`.
This is an automatic backport of pull request #15843 done by [Mergify](https://mergify.com).