Configurable shard_size default for term aggregations #84744
Labels
:Analytics/Aggregations
>enhancement
Team:Analytics
Description
Request:
The ability to set a default shard_size for the terms aggregation in index settings and/or in Kibana advanced settings.
Problem Statement:
In our environment, we have user groups that prefer to use Lens to "slice and dice" their data. One common theme we are starting to see is that when these users use the terms aggregation, they often point out data discrepancies in averages, medians, and similar metrics. When these discrepancies are brought to our engineers, we lay out the reasons described in the link below. Often we direct the end user to an aggregation-based visualization in Kibana and recommend a shard_size to set in the input JSON section; this resolves the data discrepancy almost every time. However, we commonly hear from our user groups that they don't want to set shard_size every time they create a visualization: they often forget to specify it, they don't really know what it does and misuse it, and some user groups prefer to use Lens (which has no shard_size support).
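For reference, this is roughly what users have to do today on every individual request (index name, field, and values below are only illustrative):

```json
# Illustrative example only: index, field, and sizes are placeholders
POST /my-index/_search
{
  "size": 0,
  "aggs": {
    "top_users": {
      "terms": {
        "field": "user.id",
        "size": 10,
        "shard_size": 500
      }
    }
  }
}
```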
Our developers are responsible for defining index/component templates. It would be ideal if they could define a default shard_size as an index setting in an index or component template. If not, perhaps the Advanced Settings section in Kibana would suffice? I think that allowing advanced users (developers/engineers/admins) to optionally configure the default shard_size would result in fewer reported data discrepancies, less triage time for the technical teams, and a better experience for all.
Proposed: a template/index setting that defines the default shard_size for terms aggregations.
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html#search-aggregations-bucket-terms-aggregation-shard-size
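A rough sketch of what the requested index-level default might look like. Note that the setting name `index.search.aggregations.terms.shard_size` below is purely hypothetical and does not exist in Elasticsearch today; it is only meant to illustrate the shape of the request:

```json
# Hypothetical setting name, shown only to illustrate the proposal
PUT /my-index/_settings
{
  "index.search.aggregations.terms.shard_size": 500
}
```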