docs/reference/how-to/size-your-shards.asciidoc (19 additions & 13 deletions)
--- a/docs/reference/how-to/size-your-shards.asciidoc
+++ b/docs/reference/how-to/size-your-shards.asciidoc
@@ -546,34 +546,40 @@ PUT _cluster/settings
 ----
 
 [discrete]
+[[troubleshooting-max-docs-limit]]
 ==== Number of documents in the shard cannot exceed [2147483519]
 
-
-Elasticsearch shards reflect Lucene's underlying https://github.com/apache/lucene/issues/5176[index
-`MAX_DOC` hard limit] of 2,147,483,519 (`(2^31)-129`) docs. This figure is
-the sum of `docs.count` plus `docs.deleted` as reported by the <<indices-stats,Index stats API>>
-per shard. Exceeding this limit will result in errors like the following:
+Each {es} shard is a separate Lucene index, so it shares Lucene's
+https://github.com/apache/lucene/issues/5176[`MAX_DOC` limit] of having at most
+2,147,483,519 (`(2^31)-129`) documents. This per-shard limit applies to the sum
+of `docs.count` plus `docs.deleted` as reported by the <<indices-stats,Index
+stats API>>. Exceeding this limit will result in errors like the following:
 
 [source,txt]
 ----
 Elasticsearch exception [type=illegal_argument_exception, reason=Number of documents in the shard cannot exceed [2147483519]]
 ----
 
-TIP: This calculation may differ from the <<search-count,Count API's>> calculation, because the Count API does not include nested documents.
+TIP: This calculation may differ from the <<search-count,Count API's>>
+calculation, because the Count API does not include nested documents and does
+not count deleted documents.
 
+This limit is much higher than the <<shard-size-recommendation,recommended
+maximum document count>> of approximately 200M documents per shard.
 
-Try using the <<indices-forcemerge,Force Merge API>> to clear deleted docs. For example:
+If you encounter this problem, try to mitigate it by using the
+<<indices-forcemerge,Force Merge API>> to merge away some deleted docs. For
+example:
 
 [source,console]
 ----
 POST my-index-000001/_forcemerge?only_expunge_deletes=true
 ----
 // TEST[setup:my_index]
 
-This will launch an asynchronous task which can be monitored via the <<tasks,Task Management API>>.
-
-For a long-term solution try:
+This will launch an asynchronous task which can be monitored via the
+<<tasks,Task Management API>>.
 
-* <<docs-delete-by-query,deleting unneeded documents>>
-* aligning the index to recommendations on this page by either
-<<indices-split-index,Splitting>> or <<docs-reindex,Reindexing>> the index
+It may also be helpful to <<docs-delete-by-query,delete unneeded documents>>,
+or to <<indices-split-index,split>> or <<docs-reindex,reindex>> the index into
+one with a larger number of shards.
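
The revised passage above defines the limit as the per-shard sum of `docs.count` plus `docs.deleted`, as reported by the Index stats API. As a minimal sketch of how to check how close each shard is to the limit (the index name `my-index-000001` reuses the docs' own placeholder; `filter_path` merely trims the response):

[source,console]
----
// per-shard doc stats; each shard's docs.count + docs.deleted
// should stay well below 2,147,483,519
GET /my-index-000001/_stats?level=shards&filter_path=indices.*.shards.*.docs
----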
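
The new wording also notes that the force merge runs as an asynchronous task. A hedged example of watching it with the Task Management API (the `*forcemerge` wildcard is assumed to match the force-merge task's action name):

[source,console]
----
// list running tasks whose action name ends in "forcemerge"
GET _tasks?actions=*forcemerge&detailed=true
----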
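
For the longer-term remedy of splitting the index into one with more shards, a minimal sketch under the usual split constraints (the target name `my-index-000002` and shard count are illustrative; the source must first be made read-only, and the target's shard count must be a multiple of the source's):

[source,console]
----
// block writes so the index can be split
PUT /my-index-000001/_settings
{
  "settings": {
    "index.blocks.write": true
  }
}

// split into a hypothetical target index with more primary shards
POST /my-index-000001/_split/my-index-000002
{
  "settings": {
    "index.number_of_shards": 2
  }
}
----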
--- org/elasticsearch/common/ReferenceDocs.java
+++ org/elasticsearch/common/ReferenceDocs.java
@@ -77,6 +77,7 @@ public enum ReferenceDocs {
     NETWORK_BINDING_AND_PUBLISHING,
     SNAPSHOT_REPOSITORY_ANALYSIS,
     S3_COMPATIBLE_REPOSITORIES,
+    LUCENE_MAX_DOCS_LIMIT,
     // this comment keeps the ';' on the next line so every entry above has a trailing ',' which makes the diff for adding new links cleaner
     ;
 
--- org/elasticsearch/index/engine/InternalEngine.java
+++ org/elasticsearch/index/engine/InternalEngine.java
@@ -45,6 +45,8 @@
 import org.elasticsearch.action.support.SubscribableListener;
 import org.elasticsearch.cluster.metadata.DataStream;
 import org.elasticsearch.cluster.service.ClusterApplierService;
+import org.elasticsearch.common.ReferenceDocs;
+import org.elasticsearch.common.Strings;
 import org.elasticsearch.common.lucene.LoggerInfoStream;
 import org.elasticsearch.common.lucene.Lucene;
 import org.elasticsearch.common.lucene.index.ElasticsearchDirectoryReader;
@@ -1688,7 +1690,13 @@ private Exception tryAcquireInFlightDocs(Operation operation, int addingDocs) {
         final long totalDocs = indexWriter.getPendingNumDocs() + inFlightDocCount.addAndGet(addingDocs);
         if (totalDocs > maxDocs) {
             releaseInFlightDocs(addingDocs);
-            return new IllegalArgumentException("Number of documents in the shard cannot exceed [" + maxDocs + "]");
+            return new IllegalArgumentException(
+                Strings.format(
+                    "Number of documents in the shard cannot exceed [%d]; for more information, see [%s]",
+                    maxDocs,
+                    ReferenceDocs.LUCENE_MAX_DOCS_LIMIT
+                )
+            );
         } else {
             return null;
         }
--- org/elasticsearch/common/reference-docs-links.json
+++ org/elasticsearch/common/reference-docs-links.json
@@ -37,5 +37,6 @@
"ALLOCATION_EXPLAIN_API": "cluster-allocation-explain.html",
"NETWORK_BINDING_AND_PUBLISHING": "modules-network.html#modules-network-binding-publishing",
"SNAPSHOT_REPOSITORY_ANALYSIS": "repo-analysis-api.html",
"S3_COMPATIBLE_REPOSITORIES": "repository-s3.html#repository-s3-compatible-services"
"S3_COMPATIBLE_REPOSITORIES": "repository-s3.html#repository-s3-compatible-services",
"LUCENE_MAX_DOCS_LIMIT": "size-your-shards.html#troubleshooting-max-docs-limit"
}