Hello,

Apologies in advance, I was unable to access the Slack linked in the bug report menu. I am trying to connect Kafka to a Prometheus instance, and I am confused about how index mapping works with Protobuf.

From the guide:
The mapping of fields to the corresponding proto index, which will be set as the metric name on Cortex. This is a JSON field.
Example value: {"2":"tip_amount","1":"feedback_ratings"}
The proto field value with index 2 will be stored as a metric named tip_amount in Cortex, and so on.
Type: required
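To make the quoted guide concrete, here is a hypothetical proto message (not from Firehose's docs; the field names and types are illustrative only) showing what the indices in the mapping refer to. The keys "1" and "2" are proto field tags, not positions in a record list:

```proto
// Hypothetical event schema; your actual schema comes from your own
// proto definitions.
message FeedbackEvent {
  float feedback_ratings = 1; // mapping key "1" -> metric "feedback_ratings"
  float tip_amount = 2;       // mapping key "2" -> metric "tip_amount"
}
```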
Would "tip_amount" be a record header? Can firehose handle Kafka messages with variable record list lengths?
Thank you,
Liam
The Prometheus sink on Firehose is not intended for connecting Kafka to a Prometheus instance directly, since Prometheus is pull-based. Instead, the sink pushes events from Kafka to a time-series database such as Cortex. The events from Kafka are parsed into the Prometheus exposition format.
Per the example config in the guideline section,
if SINK_PROM_METRIC_NAME_PROTO_INDEX_MAPPING is set to {"2":"tip_amount","1":"feedback_ratings"}, two metric names will be stored in the TSDB.
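As a rough sketch of what that mapping does (this is not Firehose's actual implementation, which is Java; the decoded-message representation here is made up for illustration), each proto field tag present in the mapping becomes one metric:

```python
import json

# The mapping as it would appear in SINK_PROM_METRIC_NAME_PROTO_INDEX_MAPPING.
mapping = json.loads('{"2":"tip_amount","1":"feedback_ratings"}')

# A decoded proto message represented as {field_tag: value} —
# a stand-in for whatever Firehose deserializes from Kafka.
decoded_message = {1: 4.5, 2: 1.25}

# Each mapped field tag yields one named metric sample.
metrics = {mapping[str(tag)]: value
           for tag, value in decoded_message.items()
           if str(tag) in mapping}

print(metrics)  # {'feedback_ratings': 4.5, 'tip_amount': 1.25}
```

Fields whose tags are not listed in the mapping would simply not produce a metric.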