At the moment there are no safeguards against histogram and date_histogram requests that produce a huge number of buckets on the reduce node because the number of empty buckets required by extended bounds is extremely high (see #27447 for an example).
We should add a soft limit ensuring that the number of buckets required by the combination of the interval and the extended bounds does not cross a threshold, causing GC pressure or destabilising the cluster. I suggest we set this limit to 1,000.
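To see why a limit matters, note that the number of buckets a histogram must materialize grows with the width of the extended bounds divided by the interval, so a wide range with a small interval can force millions of empty buckets. A minimal sketch of that arithmetic (the function name is illustrative, not an Elasticsearch API):

```python
def required_buckets(min_bound, max_bound, interval):
    """Number of interval-aligned buckets needed to cover
    [min_bound, max_bound] when extended_bounds forces every
    bucket in the range to be emitted, even if empty."""
    if interval <= 0:
        raise ValueError("interval must be positive")
    first = (min_bound // interval) * interval  # bucket key containing min_bound
    last = (max_bound // interval) * interval   # bucket key containing max_bound
    return (last - first) // interval + 1

# One year of epoch-millisecond bounds with a 1-second (1000 ms) interval:
year_ms = 365 * 24 * 60 * 60 * 1000
print(required_buckets(0, year_ms - 1, 1000))  # 31536000 buckets
```

Even a modest-looking request like this would ask the reduce node to allocate tens of millions of bucket objects, which is exactly the failure mode reported in #27447.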
…ed by a request (#27581)
This commit adds a new dynamic cluster setting named `search.max_buckets` that can be used to limit the number of buckets created per shard or by the reduce phase. Each multi-bucket aggregator can consume buckets during the final build of the aggregation at the shard level, or during the reduce phase (final or not) on the coordinating node. When an aggregator consumes a bucket, a global count for the request is incremented; if this count exceeds the limit, a `TooManyBuckets` exception is thrown.
This change adds the ability for multi-bucket aggregators to "consume" buckets against the global limit, which defaults to 10,000. The consumer is opt-in, so each multi-bucket aggregator must explicitly call it when a bucket is added to the response.
Closes #27452, #26012
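The opt-in consumer described in the commit message can be sketched as a per-request shared counter. This is an illustrative Python sketch of the mechanism, not the actual Elasticsearch (Java) implementation, and the class names are chosen here for clarity:

```python
class TooManyBucketsException(Exception):
    """Raised when a request consumes more buckets than the limit allows."""
    def __init__(self, limit):
        super().__init__(
            f"Trying to create too many buckets. "
            f"Must be less than or equal to: [{limit}]."
        )
        self.limit = limit


class BucketConsumer:
    """Per-request counter shared by all aggregators in the request.

    Each multi-bucket aggregator opts in by calling consume() whenever
    it adds a bucket to its response, at the shard level or during the
    reduce phase."""

    def __init__(self, limit=10_000):  # default mirrors search.max_buckets
        self.limit = limit
        self.count = 0

    def consume(self, n=1):
        self.count += n
        if self.count > self.limit:
            raise TooManyBucketsException(self.limit)


# Usage: an aggregation that emits more buckets than the limit is rejected.
consumer = BucketConsumer(limit=1_000)
try:
    for _ in range(2_000):  # aggregator tries to emit 2,000 buckets
        consumer.consume()
except TooManyBucketsException as e:
    print("rejected:", e)
```

Because the counter is global to the request, several aggregators in the same search share one budget, which is what makes the limit a soft cap on the request as a whole rather than on any single aggregation.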