
[receiver/prometheusreceiver] implement append native histogram #28663

Merged
merged 47 commits into open-telemetry:main
Apr 9, 2024

Conversation

krajorama
Contributor

@krajorama krajorama commented Oct 27, 2023

Description:

Implement the native histogram append MVP.
It is very similar to appending a float sample.

Limitations:

  • Only integer counter histograms are fully supported.
  • In case a histogram has both classic and native buckets, we only store one of them, governed by the scrape_classic_histograms scrape option. The reason is that in the OTel model the metric family is identified by the normalized name (without the _count, _sum, _bucket suffixes of the classic histogram), meaning that the classic and native histograms would map to the same metric family in the OTel model, which cannot have both Histogram and ExponentialHistogram types at the same time.
  • Gauge histograms are dropped with a warning as that temporality is unsupported; see Support for "Gauge histogram" data type, instrumentation opentelemetry-specification#2714.
  • The NoRecordedValue attribute might be unreliable. The Prometheus scrape loop marks stale series with float NaN values, but transactions in prometheusreceiver are stateless, meaning that we have to use heuristics to figure out whether we need to add a NoRecordedValue data point to an ExponentialHistogram metric. (Needs work in Prometheus.)

Additionally:

  • Created timestamp is supported.
  • Float counter histograms are not fully tested and lose precision, but we don't expect instrumentation to expose these anyway.

Link to tracking Issue:

Fixes: #26555

Testing:

Added unit tests and e2e tests.

Documentation:

TBD: will have to call out that protobuf negotiation is required while there is no text format support. #27030

@krajorama krajorama requested review from Aneurysm9, dashpole and a team as code owners October 27, 2023 12:47
@krajorama krajorama marked this pull request as draft October 27, 2023 12:47
@github-actions github-actions bot added the receiver/prometheus Prometheus receiver label Oct 27, 2023
@krajorama
Contributor Author

krajorama commented Oct 30, 2023

It seems I need to create a new metricFamily that has type MetricTypeExponentialHistogram.

The only issue is that Prometheus and Grafana Agent support scraping both the classic and native (exponential) histogram versions of the same metric, making the key (which is the normalized metric name) the same for the classic and native histogram. So I'm not totally sure what to do here. Maybe reuse the metric family, but make it possible to have a metricGroup inside it that holds the exponential histogram. This would be routable for the Prometheus remote write exporter, but I doubt it would work for OTel.

Or, for the MVP: do not enable scraping classic histograms when there is a native histogram. This will not cause an issue for OTel. The only people who would get bitten by this are those trying to switch from Prometheus to OTel while using classic + native histograms at the same time (i.e. migrating).

@dashpole
Contributor

do not enable scraping classic histograms when there is a native histogram. This will not cause an issue for OTel. The only people who would get bitten by this are those trying to switch from Prometheus to OTel while using classic + native histograms at the same time (i.e. migrating).

Could we add a config option to control whether we keep the native or the fixed-bucket histogram?

Is there any existing configuration in the Prometheus server that controls whether native histograms are enabled or not?

@krajorama
Contributor Author

do not enable scraping classic histograms when there is a native histogram. This will not cause an issue for OTel. The only people who would get bitten by this are those trying to switch from Prometheus to OTel while using classic + native histograms at the same time (i.e. migrating).

Could we add a config option to control whether we keep the native or the fixed-bucket histogram?

Is there any existing configuration in the Prometheus server that controls whether native histograms are enabled or not?

Prometheus will start scraping native histograms only if you enable the feature flag. You can also ask it to scrape the classic versions of the native histograms (since the metric names don't conflict); see scrape_classic_histograms in the scrape config.
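
For illustration, a minimal sketch of the Prometheus-side configuration being described (the job name and target are hypothetical):

# Prometheus must be started with: --enable-feature=native-histograms
scrape_configs:
  - job_name: example-app            # hypothetical job
    scrape_classic_histograms: true  # also keep the classic _bucket/_sum/_count series
    static_configs:
      - targets: ["localhost:8080"]  # hypothetical target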

@dashpole
Copy link
Contributor

dashpole commented Nov 3, 2023

Ideally we would only scrape native histograms if the feature flag is enabled, and only scrape classic histograms if scrape_classic_histograms is set.

point.SetScale(fh.Schema)
point.SetCount(uint64(fh.Count))
point.SetSum(fh.Sum)
point.SetZeroCount(uint64(fh.ZeroCount))
Contributor Author

There is no point.SetZeroThreshold function :(
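
For context, a minimal sketch (not the PR's actual code) of copying the scalar FloatHistogram fields onto a pdata exponential histogram data point; the package and function names are hypothetical, and it assumes only the pmetric setters shown below, with no SetZeroThreshold available:

package internal // hypothetical

import (
	"github.com/prometheus/prometheus/model/histogram"
	"go.opentelemetry.io/collector/pdata/pmetric"
)

// fillExponentialDataPoint copies the scalar fields only; bucket spans are
// omitted for brevity.
func fillExponentialDataPoint(dp pmetric.ExponentialHistogramDataPoint, fh *histogram.FloatHistogram) {
	dp.SetScale(fh.Schema)        // Prometheus "schema" corresponds to OTel "scale"
	dp.SetCount(uint64(fh.Count)) // float counts are truncated, hence the precision caveat
	dp.SetSum(fh.Sum)
	dp.SetZeroCount(uint64(fh.ZeroCount))
	// fh.ZeroThreshold has no counterpart setter, so the default zero bucket
	// width is implied on the OTel side.
}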

@krajorama
Contributor Author

@dashpole I've opened another PR, #29153, to set up e2e testing of native histogram scraping here. PTAL.

@krajorama
Contributor Author

Rebased onto #29153 and added the first e2e test.

@krajorama
Contributor Author

@dashpole @bogdandrutu
I'm currently looking at func (a *initialPointAdjuster) adjustMetricHistogram as a basis for adding func (a *initialPointAdjuster) adjustMetricExponentialHistogram, but I'm a little unsure how good this has to be.

I mean, it's not possible to tell with 100% certainty from a histogram whether it was restarted between two scrapes.

A possible bug in the current code is checking the Sum, since negative observations can reduce the sum.
On the other hand, it is possible that the Count increases while there is actually a reduction in individual buckets, so in theory the code should be looking at bucket counts.

@dashpole
Contributor

IIUC, the sum should not be reported if it includes negative observations (at least for "standard" histograms, since sum is a counter). https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md#histogram. Is sum not required to be monotonic for native histograms?

On the other hand, it is possible that the Count increases while there is actually a reduction in individual buckets, so in theory the code should be looking at bucket counts.

Can you clarify? Are you talking about buckets being merged? Or about the histogram resetting, but receiving more observations than it previously had since the reset?

@krajorama
Contributor Author

krajorama commented Nov 14, 2023

IIUC, the sum should not be reported if it includes negative observations (at least for "standard" histograms, since sum is a counter). https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md#histogram. Is sum not required to be monotonic for native histograms?

I made a simple test observing some negative values and checked /metrics:

# HELP golang_manual_histogram This is a histogram with manually selected parameters
# TYPE golang_manual_histogram histogram
golang_manual_histogram_bucket{address="0.0.0.0",generation="20",port="5001",le="-5"} 1
golang_manual_histogram_bucket{address="0.0.0.0",generation="20",port="5001",le="-1"} 2
golang_manual_histogram_bucket{address="0.0.0.0",generation="20",port="5001",le="1"} 3
golang_manual_histogram_bucket{address="0.0.0.0",generation="20",port="5001",le="5"} 3
golang_manual_histogram_bucket{address="0.0.0.0",generation="20",port="5001",le="+Inf"} 3
golang_manual_histogram_sum{address="0.0.0.0",generation="20",port="5001"} -12
golang_manual_histogram_count{address="0.0.0.0",generation="20",port="5001"} 3

To quote the documentation: "The sum of observations (showing up as a time series with a _sum suffix) behaves like a counter, too, as long as there are no negative observations."

So there is no guarantee that the sum actually behaves like a counter.

On the other hand, it is possible that the Count increases while there is actually a reduction in individual buckets, so in theory the code should be looking at bucket counts.

Can you clarify? Are you talking about buckets being merged? Or about the histogram resetting, but receiving more observations than it previously had since the reset?

The second case. Let's suppose you have 4 observations in the positive bucket at offset 0 and that's the only bucket in use, so your count is 4. In the next scrape the schema doesn't change, and you get 2 observations in each of the positive buckets at offsets 0, 1, and 2, so your overall count is 6, but obviously there was a restart since the count of the bucket at offset 0 went down.

See code starting from https://github.com/prometheus/prometheus/blob/1bfb3ed062e99bd3c74e05d9ff9a7fa4e30bbe21/tsdb/chunkenc/histogram.go#L272 .
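
To illustrate, a hedged sketch in plain Go (dense bucket slices, hypothetical helper, not the adjuster's actual code) of why a growing total count can still hide a reset:

// looksLikeReset reports whether the current scrape implies a counter reset.
// Real code would have to align Prometheus bucket spans before comparing.
func looksLikeReset(prevBuckets, currBuckets []uint64, prevCount, currCount uint64) bool {
	if currCount < prevCount {
		return true // the total count went down: definitely a reset
	}
	for i := range prevBuckets {
		if i < len(currBuckets) && currBuckets[i] < prevBuckets[i] {
			// A bucket shrank even though the total grew,
			// e.g. the 4 -> 2 case in the example above.
			return true
		}
	}
	return false
}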

But this is all academic if it's not actually that important to detect these cases :)

@dashpole
Contributor

The second case. Let's suppose you have 4 observations in the positive bucket at offset 0 and that's the only bucket in use, so your count is 4. In the next scrape the schema doesn't change, and you get 2 observations in each of the positive buckets at offsets 0, 1, and 2, so your overall count is 6, but obviously there was a restart since the count of the bucket at offset 0 went down.

Those cases seem rare in practice, and we've accepted those thus far as unsolvable. I'm OK with not properly handling those.

It sounds like using count to detect resets is probably our best bet.

IIRC, the Prometheus protobuf format is going to begin including start timestamps soon, so hopefully these problems become obsolete over time.

@krajorama
Contributor Author

The second case. Let's suppose you have 4 observations in the positive bucket at offset 0 and that's the only bucket in use, so your count is 4. In the next scrape the schema doesn't change, and you get 2 observations in each of the positive buckets at offsets 0, 1, and 2, so your overall count is 6, but obviously there was a restart since the count of the bucket at offset 0 went down.

Those cases seem rare in practice, and we've accepted those thus far as unsolvable. I'm OK with not properly handling those.

It sounds like using count to detect resets is probably our best bet.

IIRC, the Prometheus protobuf format is going to begin including start timestamps soon, so hopefully these problems become obsolete over time.

Understood. The one thing we have to make sure of is that exporter/prometheusremotewrite MUST NOT set the ResetHint to "NO" unless it's absolutely sure that there was no reset (as defined by Prometheus). Currently the hint is left as "UNKNOWN", which is fine.

So basically it would be possible to set "NO" only with the created timestamp, since the current algorithm above is not compatible with the Prometheus definition of a counter reset. cc @Aneurysm9 (as maintainer of the exporter).
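
As a sketch of that rule in plain Go (hypothetical names, not the exporter's actual types or code):

const (
	resetHintUnknown = "UNKNOWN"
	resetHintNo      = "NO"
)

// chooseResetHint encodes the rule above: only evidence derived from the
// created timestamp could justify "NO"; the count-based heuristic never
// proves the absence of a reset.
func chooseResetHint(provedNoResetViaCreatedTimestamp bool) string {
	if provedNoResetViaCreatedTimestamp {
		return resetHintNo
	}
	return resetHintUnknown
}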

@krajorama
Contributor Author

@dashpole this is mostly finished. I want to add a reference to a Prometheus issue we need to create and also test in real life with a couple of thousand native histogram metrics.

@dashpole
Contributor

Understood. The one thing we have to make sure of is that exporter/prometheusremotewrite MUST NOT set the ResetHint to "NO" unless it's absolutely sure that there was no reset (as defined by Prometheus). Currently the hint is left as "UNKNOWN", which is fine.

So basically it would be possible to set "NO" only with the created timestamp, since the current algorithm above is not compatible with the Prometheus definition of a counter reset.

Would you mind adding a comment in the exporter(s), referencing Prometheus' definition of a reset, to ensure that isn't changed? We could explicitly set it to unknown to make it clear that it is the correct value for us. It can be done separately from this PR.

codeboten pushed a commit that referenced this pull request Nov 15, 2023
…negotiation (#29153)

The code needs some basic tests that can be later expanded with tests
for native histograms use cases.

Changes:
Refactored the `testComponent` function to make it easier to customize the
scrape configuration.
Expanded `compareHistogram` to assert on the explicit boundaries as
well.
Added the function `prometheusMetricFamilyToProtoBuf` to the helpers to be
able to serialize a Prometheus metric family
into Protobuf.
Added a simple test of Protobuf-based scraping of counters, gauges,
summaries and histograms.

#26555  

Followup to #27030 
Related to #28663 

**Testing:**

Added a simple e2e test for scraping over Protobuf.

**Documentation:** 

Not applicable.

---------

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Co-authored-by: David Ashpole <dashpole@google.com>
@krajorama krajorama marked this pull request as ready for review November 17, 2023 10:21
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
@krajorama
Contributor Author

Makes sense to wait for #30934 to finalize the documentation side and get the bugfix prometheus/prometheus#13021 as well.

I see it was merged. I updated the branch locally, but I am running into an issue with the handling of native_histogram_min_bucket_factor in the scrape config. Not sure yet whether I need to fix something in Prometheus or here.

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
@krajorama
Contributor Author

Makes sense to wait for #30934 to finalize the documentation side and get the bugfix prometheus/prometheus#13021 as well.

I see it was merged. I updated the branch locally, but I am running into an issue with the handling of native_histogram_min_bucket_factor in the scrape config. Not sure yet whether I need to fix something in Prometheus or here.

It's a bug in Prometheus. I have to make the fix and get it into OTel as well.

Adopt open-telemetry#31908

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Test with negative schema as well

Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
@krajorama
Contributor Author

Depends on prometheus/prometheus#13846 and a backport to 0.50.2 at least.

@krajorama
Contributor Author

Depends on prometheus/prometheus#13846 and a backport to 0.50.2 at least.

Will be included in 0.51.1 for sure.

}

for _, tt := range tests {
tt := tt
Contributor

Is this line redundant?

Contributor Author

Isn't the loop variable fix in Go 1.22? We require 1.21 right now. Maybe I'm mixing this up, but there are 3 other tests that have this line (I did copy it :) ).
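
For reference, a minimal self-contained illustration (hypothetical test, assuming Go earlier than 1.22) of why the per-iteration copy matters with parallel subtests; the loop variable scoping change in Go 1.22 is what makes the line redundant:

package example

import "testing"

func TestLoopVarCapture(t *testing.T) {
	tests := []struct{ name, want string }{
		{name: "first", want: "a"},
		{name: "second", want: "b"},
	}
	for _, tt := range tests {
		tt := tt // shadow the loop variable so each parallel subtest keeps its own copy
		t.Run(tt.name, func(t *testing.T) {
			t.Parallel()
			// Without the copy, these parallel closures could all observe the
			// last element of tests once the outer loop has finished.
			if tt.want == "" {
				t.Fatalf("unexpected empty want for %s", tt.name)
			}
		})
	}
}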

@krajorama
Contributor Author

@dashpole Any chance of updating Prometheus to 0.51.1? I don't have the time currently :(

@dashpole
Contributor

dashpole commented Apr 3, 2024

I need to look into #31572 (comment)

@krajorama
Contributor Author

After the Prometheus update the tests pass again, so I think this is ready for review. I suggest focusing on whether the feature flag is correctly applied.

Contributor

@dashpole dashpole left a comment

lgtm after enable_protobuf_negotiation is removed from the README

krajorama and others added 2 commits April 5, 2024 19:51
@krajorama
Contributor Author

Thank you, I don't have the rights to merge. I'll keep updating the branch regularly until it is merged.

@dashpole dashpole added the ready to merge Code review completed; ready to merge by maintainers label Apr 8, 2024
@jpkrohling jpkrohling merged commit 13fca79 into open-telemetry:main Apr 9, 2024
169 of 170 checks passed
@github-actions github-actions bot added this to the next release milestone Apr 9, 2024
Labels
enhancement (New feature or request), ready to merge (Code review completed; ready to merge by maintainers), receiver/prometheus (Prometheus receiver)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Prometheus receiver is not enabling native histograms feature
7 participants