Add metric_selector semantics to .logs.metrics_collected.kubernetes #401
While I was digging in the source code trying to find a way to exclude my short-lived pods from CW metrics, I found that amazon-cloudwatch-agent will look for a kubernetes annotation.
I will try to use this k8s annotation, but I still think it would be good to have an exclusion mechanism that the cloudwatch-agent administrator can control (as the kubernetes annotation is controlled by whoever deploys the pod).
Agreeing with @ecerulm, the cost was very high with our k8s cluster.
This issue was marked stale due to lack of activity.
Closing this because it has stalled. Feel free to reopen if this issue is still relevant, or to ping the collaborator who labeled it stalled if you have any questions.
@SaxyPandaBear, would it be possible to reopen this issue?
I would like to see this issue reopened as well. Without it, the CloudWatch Observability EKS add-on creates a huge number of useless and expensive metrics.
The kubernetes configuration brings in metrics like `pod_memory_utilization` for all pods in a cluster. I have many pods that are ephemeral / short-lived.
I use Airflow in Kubernetes, so there are hundreds of those Airflow task pods per day, and currently the number of CloudWatch metrics that `log.metrics_collected.kubernetes` creates is really huge. Most of those metrics are not even useful, containing just 1 data point because the pods don't live long enough to produce more than one datapoint. Currently I have ~20000 metrics created this way. I would like to collect metrics only for my long-lived pods while excluding my short-lived pods (which I can identify by k8s namespace or k8s labels). I think some kind of mechanism to filter / drop / exclude pods would be beneficial. In particular, the following use cases might be of interest to most:
- exclude pods that have the k8s label `kubernetes_executor: true`, or if any of the following k8s labels are present: `airflow_worker: xx`, `airflow_version: xx`, `dag_id: xxx`, `execution_date: xx`, `task_id: xx`
- exclude pods via something similar to `.logs.metrics_collected.prometheus.emf_processor.metric_declaration` with some `source_labels` (maybe mimicking the `__meta_kubernetes_pod_*` of the kubernetes_sd_config)

Example of a hypothetical config that would exclude pods based on pod metadata: