How to capture record-lag-max metric for consumer? #670
I set "statistics.interval.ms": 10 for testing, so consumer.poll() should now trigger the stats_cb. But this isn't returning any data. Will stats data be returned by the poll function?
You need to define a stats_cb callback too, and call poll() at regular intervals to trigger the stats.
Where do we pass the stats_cb?
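For illustration, a minimal sketch of where the callback goes: it is passed in the consumer configuration dict alongside "statistics.interval.ms" (the broker address, group id, and topic below are placeholders):

```python
import json

def stats_cb(stats_json_str):
    # The callback receives the librdkafka statistics as a JSON string.
    stats = json.loads(stats_json_str)
    print("stats for client:", stats.get("name"))

# The callback and the emission interval both go into the consumer config
# dict (broker address and group id here are placeholders):
conf = {
    "bootstrap.servers": "localhost:9092",
    "group.id": "my-group",
    "statistics.interval.ms": 1000,
    "stats_cb": stats_cb,
}

# Requires a running broker, so shown here as a comment:
# from confluent_kafka import Consumer
# consumer = Consumer(conf)
# consumer.subscribe(["my-topic"])
# while True:
#     msg = consumer.poll(1.0)  # poll() also services the stats callback
```

Note that a very small interval like 10 ms mostly produces empty stats; something on the order of seconds is more typical.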
@edenhill After implementing the above, I do get back stats.
Stats will only be collected for the currently assigned/consumed partitions.
@edenhill But why do the stats hold empty values for …
@kasturichavan Can you paste your stats JSON object?
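For reference, the stats object librdkafka emits looks roughly like this (heavily abridged, with illustrative values; see librdkafka's STATISTICS.md for the full schema). A partition's consumer_lag is -1 while the lag is still unknown, e.g. before the first fetch:

```json
{
  "name": "rdkafka#consumer-1",
  "type": "consumer",
  "topics": {
    "my-topic": {
      "partitions": {
        "0": {
          "partition": 0,
          "fetch_state": "active",
          "consumer_lag": 42
        }
      }
    }
  }
}
```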
We want to scale our Kafka consumer with HPA in Kubernetes based on the Kafka custom metric record-lag. Is there a method in confluent-kafka-python that exposes the metrics? How can we get this data?
Below is the fetch metrics list:
https://docs.confluent.io/current/kafka/monitoring.html#fetch-metrics
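The fetch metrics linked above are the Java client's JMX metrics; confluent-kafka-python does not expose those directly. The closest client-side equivalent is the same stats callback discussed above: each partition object in the stats JSON carries a consumer_lag field (per librdkafka's STATISTICS.md). A hedged sketch of reducing one stats payload to a single max-lag number, which could then be pushed to whatever custom-metrics pipeline feeds the HPA:

```python
import json

def max_consumer_lag(stats_json_str):
    # Walk the librdkafka stats JSON and return the largest consumer_lag
    # across all assigned partitions (0 if no lag is known yet).
    stats = json.loads(stats_json_str)
    lags = []
    for topic in stats.get("topics", {}).values():
        for partition in topic.get("partitions", {}).values():
            lag = partition.get("consumer_lag", -1)
            if lag >= 0:  # -1 means the lag is unknown
                lags.append(lag)
    return max(lags) if lags else 0
```

A function like this would be called from inside the stats_cb, with the result exported to the metrics backend of your choice.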
Checklist
Please provide the following information:
confluent_kafka.version(): 0.11.5