Kafka Consumer offset lag metrics #609
I found a similar example in kafka-connect.yml: jmx_exporter/example_configs/kafka-connect.yml, lines 22 to 31 at commit 73ad291.
Maybe you can start with this and adapt it? If that doesn't work, please let us know how exactly the JMX bean and attributes are named, for example by attaching …
@fstab This seems correct, thanks! 🙏 However, for some reason, I'm seeing this kind of output from …
I can probably ignore the …
If I set my configuration like this to fetch only a single attribute:

```yaml
rules:
  - pattern: kafka.consumer<type=consumer-fetch-manager-metrics, client-id=(.+), topic=(.+), partition=(.+)><>records-lag
    name: kafka_connect_consumer_fetch_records_lag
    labels:
      clientId: "$1"
      topic: "$2"
      partition: "$3"
    help: "Kafka Connect JMX metric type consumer-fetch-manager"
    type: GAUGE
```

Then, initially, I see the metric 3 times in the output.
After a few minutes of waiting, I see the same metrics, but 2 of the values are NaN.
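A quick way to see why the single `records-lag` rule above produces the metric three times is to replay the attribute names against an unanchored regex, which is how the exporter applies rule patterns. This is only a sketch; the attribute names are the ones discussed later in this thread.

```python
import re

# Attribute names assumed from this thread; the JMX exporter applies rule
# patterns unanchored, so a plain "records-lag" pattern also matches inside
# the longer attribute names.
attributes = ["records-lag", "records-lag-avg", "records-lag-max"]
rule = re.compile(r"records-lag")  # effective suffix of the rule above

# All three attributes match, and the rule's fixed `name:` maps them all
# to the same metric name, producing duplicate series.
matched = [a for a in attributes if rule.search(a)]
print(matched)
```

Since the rule assigns the same fixed `name:` to every match, all three attributes collapse into one metric name with identical labels, which explains the duplicates.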
I have exactly the same issue using this configuration. @conradkleinespel were you able to resolve it?
@tadam313 Unfortunately, no.
This configuration works as expected:

```yaml
- pattern: kafka.consumer<type=consumer-fetch-manager-metrics, client-id=(.+), topic=(.+), partition=(.+)><>(records-lag[a-zA-Z-]+|records-lag)
  name: kafka_connect_consumer_fetch_$4
  labels:
    clientId: "$1"
    topic: "$2"
    partition: "$3"
  help: "Kafka Connect JMX metric type consumer-fetch-manager"
  type: GAUGE
```

The order is important. It looks like …

```yaml
- pattern: kafka.consumer<type=consumer-fetch-manager-metrics, client-id=(.+), topic=(.+), partition=(.+)><>(records-lag.*)
```

I hope this helps.
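The rule above can be sketched in a few lines: replaying each attribute name against the alternation pattern shows how every attribute gets its own metric name. The attribute names are assumed from this thread, and the final hyphen-to-underscore replacement mimics the exporter's sanitization of characters that are invalid in Prometheus metric names (visible in the output later in this thread).

```python
import re

# Sketch of the working rule: the alternation tries the longer
# "records-lag-..." names first, then falls back to plain "records-lag".
PATTERN = re.compile(
    r"kafka\.consumer<type=consumer-fetch-manager-metrics, "
    r"client-id=(.+), topic=(.+), partition=(.+)><>"
    r"(records-lag[a-zA-Z-]+|records-lag)"
)

metric_names = []
for attr in ("records-lag", "records-lag-avg", "records-lag-max"):
    mbean = ("kafka.consumer<type=consumer-fetch-manager-metrics, "
             f"client-id=foo, topic=bar, partition=0><>{attr}")
    m = PATTERN.search(mbean)
    # "$4" in the YAML rule corresponds to m.group(4) here
    metric_names.append("kafka_connect_consumer_fetch_"
                        + m.group(4).replace("-", "_"))
print(metric_names)
```

Each attribute now maps to a distinct metric name, so the series no longer collide.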
@superfav Thanks for your help, this fixes the issue on my side too! I had a quick look at the JMX exporter docs; it says the pattern is not anchored. From what I understand, that means a rule ending in plain `records-lag` also matches inside `records-lag-avg` and `records-lag-max`, so all three attributes end up under the same metric name.
Including `records-lag-avg` and `records-lag-max` explicitly avoids `kafka_connect_consumer_fetch_records_lag` mixing its values as seen here:

```
kafka_connect_consumer_fetch_records_lag{clientId="foo",partition="0",topic="bar",} 0.0
kafka_connect_consumer_fetch_records_lag{clientId="foo",partition="0",topic="bar",} NaN
kafka_connect_consumer_fetch_records_lag{clientId="foo",partition="0",topic="bar",} NaN
```

After applying this change:

```
# HELP kafka_connect_consumer_fetch_records_lag Kafka Connect JMX metric type consumer-fetch-manager
# TYPE kafka_connect_consumer_fetch_records_lag gauge
kafka_connect_consumer_fetch_records_lag{clientId="foo",partition="0",topic="bar",} 0.0
# HELP kafka_connect_consumer_fetch_records_lag_avg Kafka Connect JMX metric type consumer-fetch-manager
# TYPE kafka_connect_consumer_fetch_records_lag_avg gauge
kafka_connect_consumer_fetch_records_lag_avg{clientId="foo",partition="0",topic="bar",} NaN
# HELP kafka_connect_consumer_fetch_records_lag_max Kafka Connect JMX metric type consumer-fetch-manager
# TYPE kafka_connect_consumer_fetch_records_lag_max gauge
kafka_connect_consumer_fetch_records_lag_max{clientId="foo",partition="0",topic="bar",} NaN
```
Closing as resolved.
I have instantiated jmx-exporter-prometheus containers in many Kafka services. Some metrics are being exported in duplicate, with one of the values being NaN, for example in the ksql service.
Can anyone help, please?
Hi all,

Consumer offset lag metrics through the JMX exporter are not working. I have the following in the config files, but it doesn't fetch the required details:

```yaml
name: kafka_$1_$3
labels:
  client-id: $2
```

Version details:
Kafka version: kafka_2.13-2.7.1
jmx exporter: https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.15.0/jmx_prometheus_javaagent-0.15.0.jar
Please let me know if any other details are required.
Thanks
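One thing worth double-checking in the config fragment above: raw Prometheus label names must match `[a-zA-Z_][a-zA-Z0-9_]*`, and `client-id` contains a hyphen. Newer jmx_exporter versions may sanitize the name, so this is only a possible cause, not a confirmed one. A quick check:

```python
import re

# Prometheus label-name rule: a letter or underscore, then letters,
# digits, or underscores. "client-id" violates this because of the hyphen.
LABEL_NAME = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]*$")
hyphen_ok = bool(LABEL_NAME.match("client-id"))
underscore_ok = bool(LABEL_NAME.match("client_id"))
print(hyphen_ok, underscore_ok)
```

If the label name is the problem, renaming the label to `client_id` in the rule would be the fix to try first.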