[Stack Monitoring] Logstash Overview Panel missing due to max_buckets #56461
@igoristic Is this something you can look into?
@chrisronline @igoristic What if you changed the And by paginating through the results, I would keep the behavior of
We were accidentally getting all the pipelines on the Overview page just to see if there is a single bucket (to decide if we want to show Logstash stats). And, since this method had a bug that fetched all

@pickypg I stress tested this with 100 generator pipelines, which did not cause any max-buckets errors, and the JVM spikes seem to be significantly lower. But I would like to know how it behaves with your environment.

Thanks to @simianhacker's suggestion, I investigated using:

```
GET *:.monitoring-logstash-6-*,*:.monitoring-logstash-7-*,*:monitoring-logstash-7-*,*:monitoring-logstash-8-*,.monitoring-logstash-6-*,.monitoring-logstash-7-*,monitoring-logstash-7-*,monitoring-logstash-8-*/_search
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        {
          "term": {
            "cluster_uuid": "So2SpBkMT-yvN311fn8q3A"
          }
        },
        {
          "range": {
            "logstash_stats.timestamp": {
              "format": "epoch_millis",
              "gte": 1582256420161,
              "lte": 1582260020161
            }
          }
        }
      ]
    }
  },
  "aggs": {
    "check": {
      "composite": {
        "size": 1000,
        "sources": [
          {
            "timestamp": {
              "date_histogram": {
                "field": "logstash_stats.timestamp",
                "fixed_interval": "30s"
              }
            }
          }
        ]
      },
      "aggs": {
        "pipelines_nested": {
          "nested": {
            "path": "logstash_stats.pipelines"
          },
          "aggs": {
            "by_pipeline_id": {
              "terms": {
                "field": "logstash_stats.pipelines.id",
                "include": ["random_00", "random_01", "random_02", "random_03", "random_04"],
                "size": 1000
              },
              "aggs": {
                "to_root": {
                  "reverse_nested": {},
                  "aggs": {
                    "node_count": {
                      "cardinality": {
                        "field": "logstash_stats.logstash.uuid"
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```

However, implementing this in the

I also played around with
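The "paginate through the results" idea discussed above maps onto the composite aggregation's `after` parameter. Below is a minimal sketch, not the actual Kibana implementation: the helper name `with_after` is made up, and the response shape is assumed to match the `check` composite aggregation in the query above.

```python
import json

def with_after(body, response):
    """Return a copy of the search body that resumes the `check` composite
    aggregation after the last returned page, or None when no page remains."""
    agg = response["aggregations"]["check"]
    after = agg.get("after_key")
    page_size = body["aggs"]["check"]["composite"]["size"]
    # Fewer buckets than `size` (or no after_key at all) means this was the last page.
    if after is None or len(agg["buckets"]) < page_size:
        return None
    nxt = json.loads(json.dumps(body))  # cheap deep copy, leaves the original untouched
    nxt["aggs"]["check"]["composite"]["after"] = after
    return nxt
```

Each page then stays under `search.max_buckets` on its own, at the cost of one search request per page.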
Given a large enough number of Logstash Pipelines, I have run into a situation where the Logstash Panel does not appear under the Deployment/Cluster overview because Elasticsearch is rejecting the Logstash search due to too many buckets.
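To see why the limit trips, a back-of-the-envelope bucket count helps: the date_histogram buckets multiply with the per-pipeline terms buckets. The 30s interval matches the `fixed_interval` in the query above; the 1-hour window and 200-pipeline count are assumed values for illustration only.

```python
# Rough bucket-count estimate: date_histogram buckets x pipeline terms buckets.
window_s = 60 * 60          # assumed 1-hour time picker window
interval_s = 30             # fixed_interval from the query above
time_buckets = window_s // interval_s   # 120 date_histogram buckets
pipelines = 200                          # hypothetical pipeline count
total_buckets = time_buckets * pipelines
print(total_buckets)        # 24000, well past the 10000 default soft limit
```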
I saw this while running v7.5.2 with:
Workaround
For anyone running into this situation, there are at least three workarounds:

- Increase the `search.max_buckets` soft limit in the cluster settings of the Monitoring cluster that contains the monitoring indices. This can be done dynamically via `_cluster/settings`, and it defaults to `10000` buckets. Do this with caution because the soft limit exists to limit memory usage in Elasticsearch.
- Construct a URL with the `cluster_uuid` in it (using either approach above) and navigate to it directly.
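For reference, raising the soft limit dynamically looks like this in Dev Tools against the Monitoring cluster; the value `20000` is only an example, and as noted above the limit should be raised with caution:

```
PUT _cluster/settings
{
  "transient": {
    "search.max_buckets": 20000
  }
}
```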