Take into account all new partitions when checking shard limits (backport #15843) #15852

Merged 1 commit on Apr 17, 2024
5 changes: 4 additions & 1 deletion docs/appendices/release-notes/5.6.5.rst
@@ -47,4 +47,7 @@ See the :ref:`version_5.6.0` release notes for a full list of changes in the
Fixes
=====

-None
+- Fixed an issue that allowed ``INSERT INTO`` statements to create more shards
+  than allowed by :ref:`cluster.max_shards_per_node
+  <cluster.max_shards_per_node>` if the statement resulted in the creation of
+  many partitions at once.
6 changes: 5 additions & 1 deletion docs/appendices/release-notes/5.7.1.rst
@@ -47,4 +47,8 @@ See the :ref:`version_5.7.0` release notes for a full list of changes in the
Fixes
=====

-None
+- Fixed an issue that allowed ``INSERT INTO`` statements to create more shards
+  than allowed by :ref:`cluster.max_shards_per_node
+  <cluster.max_shards_per_node>` if the statement resulted in the creation of
+  many partitions at once.

@@ -59,6 +59,7 @@
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Priority;
import org.elasticsearch.common.UUIDs;
+import org.elasticsearch.common.ValidationException;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.inject.Singleton;
import org.elasticsearch.common.io.stream.StreamInput;
@@ -219,7 +220,17 @@ private ClusterState executeCreateIndices(ClusterState currentState, CreateParti
// Use only first index to validate that index can be created.
// All indices share same template/settings so no need to repeat validation for each index.
Settings commonIndexSettings = createCommonIndexSettings(currentState, template);
-        shardLimitValidator.validateShardLimit(commonIndexSettings, currentState);
+
+        final int numberOfShards = INDEX_NUMBER_OF_SHARDS_SETTING.get(commonIndexSettings);
+        final int numberOfReplicas = IndexMetadata.INDEX_NUMBER_OF_REPLICAS_SETTING.get(commonIndexSettings);
+        // Shard limit check has to take into account all new partitions.
+        final int numShardsToCreate = numberOfShards * (1 + numberOfReplicas) * indicesToCreate.size();
+        shardLimitValidator.checkShardLimit(numShardsToCreate, currentState)
+            .ifPresent(err -> {
+                final ValidationException e = new ValidationException();
+                e.addValidationError(err);
+                throw e;
+            });
int routingNumShards = IndexMetadata.INDEX_NUMBER_OF_ROUTING_SHARDS_SETTING.get(commonIndexSettings);
IndexMetadata.Builder tmpImdBuilder = IndexMetadata.builder(firstIndex)
.setRoutingNumShards(routingNumShards);
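The shard accounting introduced in the hunk above can be sketched as a standalone snippet; the class name and the concrete numbers below are illustrative assumptions, not CrateDB code:

```java
// Illustrative sketch (not CrateDB code) of the shard accounting above:
// every new partition index contributes its primary shards plus one copy
// per replica, so creating several partitions in one statement multiplies
// the total accordingly.
public class ShardAccounting {

    static int numShardsToCreate(int numberOfShards, int numberOfReplicas, int newPartitions) {
        return numberOfShards * (1 + numberOfReplicas) * newPartitions;
    }

    public static void main(String[] args) {
        // e.g. 2 primaries with 1 replica across 3 new partitions:
        // 2 * (1 + 1) * 3 = 12 shards requested by a single statement.
        System.out.println(numShardsToCreate(2, 1, 3)); // prints 12
    }
}
```

The key point of the fix is the multiplication by the partition count: validating only the first index's settings under-counts when one statement creates many partitions at once.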
@@ -65,6 +65,14 @@ partitioned by (p)
.hasHTTPError(HttpResponseStatus.BAD_REQUEST, 4000)
.hasMessageContaining(
"this action would add [4] total shards, but this cluster currently has [0]/[2] maximum shards open;");

+        // Bulk insert forcing creation of multiple partitions at once.
+        execute("set global \"cluster.max_shards_per_node\" = 4");
+        Asserts.assertSQLError(() -> execute("insert into tbl (x, p) values (1, 1), (2, 2), (3, 3)"))
+            .hasPGError(PGErrorStatus.INTERNAL_ERROR)
+            .hasHTTPError(HttpResponseStatus.BAD_REQUEST, 4000)
+            .hasMessageContaining(
+                "this action would add [12] total shards, but this cluster currently has [0]/[8] maximum shards open");
}
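The diff's validation style, where the check returns an error message rather than throwing directly, can be mimicked in a minimal standalone sketch; the class name, method signature, and message format here are assumptions for illustration, not CrateDB's actual API:

```java
import java.util.Optional;

// Minimal sketch (assumed names, not CrateDB's API) of a limit check that
// returns Optional.empty() on success or an error message on failure,
// leaving the caller to decide how to surface the problem.
public class LimitCheck {

    static Optional<String> checkShardLimit(int shardsToCreate, int openShards, int maxShards) {
        if (openShards + shardsToCreate > maxShards) {
            return Optional.of("this action would add [" + shardsToCreate
                + "] total shards, but this cluster currently has ["
                + openShards + "]/[" + maxShards + "] maximum shards open");
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // 12 requested shards against a limit of 8 produces an error message,
        // mirroring the [12] vs [0]/[8] assertion in the test above.
        System.out.println(checkShardLimit(12, 0, 8).isPresent()); // prints true
        System.out.println(checkShardLimit(4, 0, 8).isPresent());  // prints false
    }
}
```

Returning an `Optional` lets the production caller wrap the message in a `ValidationException`, as the diff does with `ifPresent`, while other call sites could log or aggregate errors instead.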

@Test