When using the Index Threshold rule type, add ability to check the count of buckets #133030
Pinging @elastic/response-ops (Team:ResponseOps)
Wondering how we'd structure this in terms of the params. The easiest thing would be to add a new "AggType":
Maybe call it something like a bucket-count agg type; it would look like the existing agg types in the params. Alternatively, would there be something more direct involving cardinality? So, add a cardinality agg type.
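To make the idea above concrete, here is a minimal sketch of what the rule params and condition check might look like if a cardinality-style agg type were added. This is not a shipped API: the param names mirror the existing Index Threshold rule params (`aggType`, `aggField`, `threshold`, etc.), but the `"cardinality"` value and the evaluation logic are assumptions for illustration.

```python
# Hypothetical Index Threshold rule params with a cardinality agg type.
# These field names mirror the existing rule's params; the "cardinality"
# aggType is the proposal under discussion, not a shipped feature.
hypothetical_params = {
    "index": ["my-logs-*"],        # assumed index pattern
    "timeField": "@timestamp",
    "aggType": "cardinality",      # the new agg type being discussed
    "aggField": "cloud.provider",  # count distinct values of this field
    "timeWindowSize": 4,
    "timeWindowUnit": "h",
    "thresholdComparator": "<",    # alert when fewer buckets than expected
    "threshold": [3],
}

def should_alert(distinct_count, comparator, threshold):
    """Evaluate the proposed condition: compare the number of distinct
    buckets against the configured threshold."""
    if comparator == "<":
        return distinct_count < threshold[0]
    if comparator == ">":
        return distinct_count > threshold[0]
    raise ValueError(f"unsupported comparator: {comparator}")

print(should_alert(2, "<", [3]))  # only 2 providers seen -> True (alert)
print(should_alert(3, "<", [3]))  # all 3 seen -> False
```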
The index threshold rule type currently doesn't support any filters, though there are issues open to add this, which makes me wonder if there's another rule type - presumably in o11y - that might already do this out of the box. It looks like maybe the Metrics Threshold could? I think the potential problem with this is that the o11y rules typically configure the indices searched via Kibana config (probably per-space), so it could be clumsy to set this up for some unique indices, and it could stop working if someone resets the config.
All great info @pmuellr. Note that I am also currently looking at how I might be able to use Transforms to structure the data in a way that might allow using the Elasticsearch Query rule type. (In which case maybe what is needed is an update to the documentation on how to do this.)
Transforms seem like a good approach to making data from an index easier to handle from the alerting side. I think doc-wise, we'd just suggest this is a possibility, since presumably the work entails creating the transform, then building the rule to operate against the new index created by the transform. Are you thinking some per-ruleType help, on how to structure such transforms for use with a particular rule type? |
I did some research last week and it does look like Transforms could provide a mechanism to get the number of buckets for a particular field. (I used a "latest" transform and specified the field I wanted bucketed as a "unique key". The Transform seemed to keep the latest document for each unique key. I was then able to create a rule that looked at the Transform index going back several hours, and verified that the number of documents was X.)
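The latest-transform approach described above can be sketched as follows. This is the JSON body you would PUT to Elasticsearch's `_transform` API, expressed as a Python dict; index and field names are assumptions for illustration, and the sync settings are just one reasonable choice.

```python
# Sketch of a "latest" transform that keeps the most recent document per
# unique cloud.provider value, as described in the comment above.
latest_transform = {
    "source": {"index": "my-logs-*"},          # assumed source indices
    "latest": {
        "unique_key": ["cloud.provider"],      # one output doc per provider
        "sort": "@timestamp",                  # keep the most recent doc
    },
    "dest": {"index": "latest-by-provider"},   # index the rule will query
    "sync": {"time": {"field": "@timestamp", "delay": "60s"}},
}

def condition_met(doc_count, expected=3):
    """With a latest transform there is one doc per unique key, so the
    rule can simply check that the doc count in the time window equals
    the expected number of buckets."""
    return doc_count == expected

print(condition_met(3))  # all 3 providers observed -> True
print(condition_met(2))  # one provider missing -> False
```

Because the transform collapses each unique key to a single document, the downstream rule only needs a plain document count, which the existing rule types already support.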
This makes sense and could be useful for folks versed in what is possible in Watcher, but who might not know all the possibilities Elasticsearch offers.
I'm not exactly sure? I'm not sure if per-ruleType is needed (that might be too detailed). For my particular case, I just needed to know that Transforms existed and that they could be used to manipulate the data to a form better suited to my rule. It might be that the best place for a blurb about Transforms is in the section that talks about Watcher vs. Kibana Alerting? (Since, afaik, we don't have a migration guide.)
Ya, I'm on-board with some section in the docs talking about other things you can do to make your existing data fit rules - transforms being one tool, another example would be using a filtered alias if the rule type doesn't otherwise have filters, etc.
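The filtered-alias idea mentioned above can be sketched as a body for Elasticsearch's `_aliases` API: the alias bakes a filter into the index, so a rule type without filter support can simply point at the alias. The index, alias, and filter field names here are assumptions for illustration.

```python
# Sketch of a filtered alias: queries against "provider-send-logs" only
# see documents matching the embedded filter, so a filter-less rule type
# gets filtering "for free" by targeting the alias.
filtered_alias = {
    "actions": [
        {
            "add": {
                "index": "my-logs-*",           # assumed source indices
                "alias": "provider-send-logs",  # alias the rule queries
                "filter": {                     # only docs the rule cares about
                    "term": {"event.action": "send"}
                },
            }
        }
    ]
}

print(filtered_alias["actions"][0]["add"]["alias"])  # provider-send-logs
```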
cc: @shanisagiv1
Describe the feature:
I'd like to use the Index Threshold rule type to bucket on a particular field, then I'd like to check that the count of buckets matches a particular number.
Describe a specific use case for the feature:
We have several Watches that we'd like to move over to Kibana Alerting. These Watches query an index with a set of filters, then aggregate those results, and the `condition` within the Watch verifies that the count of buckets matches a particular number.

For a concrete example: we have a service that sends data to 3 cloud providers - AWS, GCP, and Azure. When it sends data, it logs some info about the data it is sending. This info includes a field called `cloud.provider`. We'd like to make sure that, when looking at the last few hours of logs, we have logs indicating that we are sending data to all 3 cloud providers. We do this by bucketing by `cloud.provider` and then making sure that we have 3 buckets.
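The Watcher-style check described in this use case can be sketched as a terms aggregation on `cloud.provider` plus a condition on the bucket count. The index name and time window are assumptions; the response shapes below are hand-written samples mimicking the structure of an Elasticsearch aggregation response.

```python
# Search body: bucket the last few hours of logs by cloud.provider.
search_body = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-4h"}}},  # assumed window
    "aggs": {
        "providers": {
            "terms": {"field": "cloud.provider", "size": 10}
        }
    },
}

def all_providers_reporting(response, expected=3):
    """The Watch condition: true when the terms agg produced exactly
    `expected` buckets (one per cloud provider)."""
    buckets = response["aggregations"]["providers"]["buckets"]
    return len(buckets) == expected

# Simulated responses for the two interesting cases:
ok = {"aggregations": {"providers": {"buckets": [
    {"key": "aws"}, {"key": "gcp"}, {"key": "azure"}]}}}
missing = {"aggregations": {"providers": {"buckets": [
    {"key": "aws"}, {"key": "gcp"}]}}}

print(all_providers_reporting(ok))       # True  - all 3 providers logged
print(all_providers_reporting(missing))  # False - azure is missing
```

This is exactly the check the issue asks the Index Threshold rule type to express: bucket on a field, then compare the number of buckets to a target.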