Advanced features
NOTE: This feature is available since version 2.3.1.
By setting the ANODOT_MAX_ALLOWED_EPS environment variable (default is no limit), you can limit the EPS (events per second) of the Anodot remote write application, so as not to breach the EPS limit configured for your account. The Anodot remote write application will throttle outgoing requests so that they do not exceed the ANODOT_MAX_ALLOWED_EPS value.
Example:

```yaml
environment:
  ANODOT_MAX_ALLOWED_EPS: 5000
```
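The throttling behaviour can be pictured as a simple rate limiter. The following is only an illustrative sketch (not the application's actual implementation), assuming a token-bucket model:

```python
import time


class EpsThrottle:
    """Illustrative token-bucket limiter: allows at most `max_eps` events per second."""

    def __init__(self, max_eps):
        self.max_eps = max_eps
        self.tokens = float(max_eps)
        self.last = time.monotonic()

    def acquire(self, events):
        """Block until `events` data points may be sent without exceeding max_eps."""
        while True:
            now = time.monotonic()
            # Refill tokens proportionally to elapsed time, capped at max_eps.
            self.tokens = min(self.max_eps, self.tokens + (now - self.last) * self.max_eps)
            self.last = now
            if self.tokens >= events:
                self.tokens -= events
                return
            # Sleep just long enough for the missing tokens to accumulate.
            time.sleep((events - self.tokens) / self.max_eps)


throttle = EpsThrottle(max_eps=5000)
throttle.acquire(1000)  # returns immediately while under the limit
```

Outgoing batches would call `acquire()` with the number of data points they carry, delaying any batch that would push the rate over the configured limit.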
Static tags can be added by using the ANODOT_TAGS environment variable. Such tags are applied to all metrics processed by anodot-remote-write.
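For example, static tags could be set like this (the exact value syntax shown below is an assumption for illustration; check the anodot-remote-write README for the supported format):

```yaml
environment:
  # Hypothetical key=value list; verify the delimiter format in the project docs.
  ANODOT_TAGS: "environment=production;team=sre"
```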
Prometheus labels that have the prefix anodot_tag_ will be automatically converted to Anodot metric tags and will not be included as dimensions.
More information on the transformation from Prometheus to Anodot metrics can be found here
The following example uses a relabel configuration on a Prometheus server:

```yaml
metric_relabel_configs:
  - source_labels: [job]
    regex: (.*)
    target_label: anodot_tag_job_name
    replacement: tag-${1}
```
The tag `job_name=tag-${job}` will be added to all metrics before they are sent to Anodot.
Additional examples of how Prometheus labels are converted to Anodot metric tags:

- `anodot_tag_job_name` -> `job_name`
- `anodot_tag_instance_name` -> `instance_name`
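The prefix-stripping conversion above can be sketched as follows (illustrative only, not the application's actual code; the function name is hypothetical):

```python
ANODOT_TAG_PREFIX = "anodot_tag_"


def split_labels(labels):
    """Split Prometheus labels into Anodot dimensions and tags.

    Labels prefixed with `anodot_tag_` become tags (prefix stripped);
    all other labels remain dimensions.
    """
    dimensions, tags = {}, {}
    for name, value in labels.items():
        if name.startswith(ANODOT_TAG_PREFIX):
            tags[name[len(ANODOT_TAG_PREFIX):]] = value
        else:
            dimensions[name] = value
    return dimensions, tags


dims, tags = split_labels({"job": "kubelet", "anodot_tag_job_name": "tag-kubelet"})
# dims == {"job": "kubelet"}, tags == {"job_name": "tag-kubelet"}
```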
Kubernetes pods managed by Deployments and ReplicaSets have unique, randomly generated names. As a result, each time a pod is re-created, a new random name is assigned to it.
For example:

```
cloudwatch-exporter-945b6685d-hxfcz   1/1   Running   0   123d
elastic-exporter-6c476798f7-6xgls     1/1   Running   0   14d
```
Prometheus metrics for such pods will look like the following:

```
container_memory_usage_bytes{pod_name="elastic-exporter-6c476798f7-6xgls",job="kubelet",namespace="monitoring",node="cluster-node1"}
container_memory_usage_bytes{pod_name="cloudwatch-exporter-945b6685d-hxfcz",job="kubelet",namespace="monitoring",node="cluster-node2"}
```
Random names are problematic because we sometimes need to maintain the history of a given ReplicaSet's behaviour, for example to detect anomalies over a period of time. Frequent changes may also make the metrics-churn logic ineffective and may create too many temporary, redundant metric instances. In most cases, the random name is needed only during the pod's lifetime, for troubleshooting purposes. In all other cases, it is only required that the pod has a globally unique name, no matter what it is.
Each pod in a Deployment/ReplicaSet/DaemonSet is assigned a unique label `anodot.com/podName=${deployment-name}-${ordinal}`, where the ordinal is assigned incrementally to each pod.
When metrics arrive at anodot-prometheus-remote-write, the original `pod` and `pod_name` values are replaced with the `anodot.com/podName` value. The anodot-prometheus-remote-write application keeps track of all pod information (the mapping between original pod names and the names under the `anodot.com/podName` label). If there is no mapping information for a given pod, its metrics will not be sent to Anodot.
The following example shows how re-writing is done:
```
kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
cloudwatch-exporter-945b6685d-hxfcz   1/1     Running   0          124d

kubectl describe pods cloudwatch-exporter-945b6685d-hxfcz
Name:      cloudwatch-exporter-945b6685d-hxfcz
Namespace: monitoring
Labels:    anodot.com/podName=cloudwatch-exporter-0
           pod-template-hash=501622418
```
Kubelet metrics scraped by Prometheus look like the following:

```
container_memory_usage_bytes{container_name="elastic-seporter",job="kubelet",namespace="monitoring",pod_name="cloudwatch-exporter-945b6685d-hxfcz"}
```
- After anodot-prometheus-remote-write processes these metrics, the `pod_name` value will be changed to `cloudwatch-exporter-0` before the metrics are sent to the Anodot platform.
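The rewrite step described above can be sketched as follows (illustrative only; the real application builds this mapping from pod information it tracks in the cluster, and the function name is hypothetical):

```python
# Mapping from original (random) pod names to stable anodot.com/podName values,
# as tracked by anodot-prometheus-remote-write.
POD_NAME_MAPPING = {
    "cloudwatch-exporter-945b6685d-hxfcz": "cloudwatch-exporter-0",
}


def rewrite_pod_labels(labels):
    """Replace the random pod name with its stable name.

    Returns the rewritten labels, or None when there is no mapping
    (such metrics are dropped and not sent to Anodot).
    """
    rewritten = dict(labels)
    for key in ("pod", "pod_name"):
        if key in rewritten:
            stable = POD_NAME_MAPPING.get(rewritten[key])
            if stable is None:
                return None  # no mapping information: drop the metric
            rewritten[key] = stable
    return rewritten
```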
- Install the anodot-pod-relabel helm chart by following the steps described here: https://github.com/anodot/helm-charts/tree/master/charts/anodot-pod-relabel
- Make sure that `K8S_RELABEL_SERVICE_URL` is set under `Values.configuration.env` in the anodot-remote-write `values.yaml`.
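For example, the `values.yaml` entry might look like this (the service address below is a placeholder; use the actual in-cluster address of your anodot-pod-relabel service):

```yaml
configuration:
  env:
    # Placeholder URL: point this at your anodot-pod-relabel service.
    K8S_RELABEL_SERVICE_URL: http://anodot-pod-relabel.monitoring.svc.cluster.local:8080
```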