
[Prometheus Receiver] Modify existing MetricBuilder to accept unorganized datapoints from Prometheus ScrapeLoop #6400

Closed
PaurushGarg opened this issue Nov 22, 2021 · 2 comments
Labels: comp:prometheus, comp: receiver, Stale

Comments

PaurushGarg (Member) commented Nov 22, 2021

Describe the bug
The existing OTel Prometheus receiver metricBuilder expects the Prometheus scrape loop to send data points organized and grouped by metric name. Prometheus does not guarantee this grouping for failed scrapes, which causes the metric builder to generate metrics non-deterministically for those scrapes.
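For illustration, a hypothetical scrape body where samples from two metric families arrive interleaved; under the current group-on-encounter logic, the builder would flush the `http_requests_total` family as soon as it sees `process_cpu_seconds_total`, leaving the later `http_requests_total` sample to start a second, conflicting group:

```
http_requests_total{code="200"} 10
process_cpu_seconds_total 3.2
http_requests_total{code="500"} 2
```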

Describe the solution you'd like
Currently, metricBuilder aggregates data points until a new metric family is encountered, and only then converts the aggregated points into a metric. This produces non-deterministic output when data points arrive in random order. Per Prometheus, it is valid, though rare, for the scrape loop to emit metrics in random order even for successful scrapes.
One proposed solution is to modify the metricBuilder logic to buffer (aggregate) and sort the data points in the AddDataPoint method, and move the metric-building logic entirely into the commit step, as sketched below.
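A minimal sketch in Go of that buffer-then-sort approach (the `dataPoint` and `metricFamily` types and the method signatures below are hypothetical simplifications, not the receiver's actual API): `AddDataPoint` only buffers, and `Commit` sorts and groups before anything is built, so delivery order no longer affects the output.

```go
package main

import (
	"fmt"
	"sort"
)

// dataPoint is a simplified stand-in for one scraped Prometheus sample.
type dataPoint struct {
	metricName string
	labels     string // flattened label set, simplified for the sketch
	value      float64
	timestamp  int64
}

// metricFamily groups all points that share a metric name.
type metricFamily struct {
	name   string
	points []dataPoint
}

// metricBuilder buffers every data point and defers all grouping to
// Commit, so the order in which the scrape loop delivers points no
// longer affects the result.
type metricBuilder struct {
	buffered []dataPoint
}

// AddDataPoint only appends; nothing is finalized here.
func (b *metricBuilder) AddDataPoint(dp dataPoint) {
	b.buffered = append(b.buffered, dp)
}

// Commit stably sorts the buffer by metric name (preserving scrape
// order within each family) and then builds one family per name,
// yielding deterministic output even for unordered scrapes.
func (b *metricBuilder) Commit() []metricFamily {
	sort.SliceStable(b.buffered, func(i, j int) bool {
		return b.buffered[i].metricName < b.buffered[j].metricName
	})
	var families []metricFamily
	for _, dp := range b.buffered {
		if n := len(families); n == 0 || families[n-1].name != dp.metricName {
			families = append(families, metricFamily{name: dp.metricName})
		}
		f := &families[len(families)-1]
		f.points = append(f.points, dp)
	}
	b.buffered = nil
	return families
}

func main() {
	b := &metricBuilder{}
	// Points arrive interleaved across metric families, as in the
	// example above; Commit still yields one group per family.
	b.AddDataPoint(dataPoint{metricName: "http_requests_total", labels: `{code="200"}`, value: 10, timestamp: 1})
	b.AddDataPoint(dataPoint{metricName: "process_cpu_seconds_total", value: 3.2, timestamp: 1})
	b.AddDataPoint(dataPoint{metricName: "http_requests_total", labels: `{code="500"}`, value: 2, timestamp: 1})
	for _, f := range b.Commit() {
		fmt.Printf("%s: %d point(s)\n", f.name, len(f.points))
	}
}
```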

Additional context
Related to open-telemetry/wg-prometheus#57
Issues: #6000 #6087
cc: @alolita @Aneurysm9

PaurushGarg (Member, Author) commented:
@alolita please assign this issue to me. I would like to work on this one.

github-actions bot (Contributor) commented Nov 7, 2022

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.
