Fix sample_and_watermark_test.go for bad luck, repeated test #106325

Merged: 1 commit, Nov 16, 2021
@@ -17,13 +17,13 @@ limitations under the License.
 package metrics
 
 import (
+	"errors"
 	"fmt"
 	"math/rand"
 	"testing"
 	"time"
 
 	compbasemetrics "k8s.io/component-base/metrics"
-	"k8s.io/component-base/metrics/legacyregistry"
 	"k8s.io/klog/v2"
 	testclock "k8s.io/utils/clock/testing"
 )
@@ -36,6 +36,8 @@ const (
 	numIterations = 100
 )
 
+var errMetricNotFound = errors.New("not found")
+
 /* TestSampler does a rough behavioral test of the sampling in a
    SampleAndWatermarkHistograms. The test creates one and exercises
    it, checking that the count in the sampling histogram is correct at
@@ -59,9 +61,10 @@ func TestSampler(t *testing.T) {
 		&compbasemetrics.HistogramOpts{Name: "marks", Buckets: buckets},
 		[]string{})
 	saw := gen.Generate(0, 1, []string{})
-	regs := gen.metrics()
-	for _, reg := range regs {
-		legacyregistry.MustRegister(reg)
+	toRegister := gen.metrics()
+	registry := compbasemetrics.NewKubeRegistry()
+	for _, reg := range toRegister {
+		registry.MustRegister(reg)
 	}
 	// `dt` is the admitted cumulative difference in fake time
 	// since the start of the test. "admitted" means this is
@@ -83,8 +86,8 @@ func TestSampler(t *testing.T) {
 		clk.SetTime(t1)
 		saw.Observe(1)
 		expectedCount := int64(dt / samplingPeriod)
-		actualCount, err := getHistogramCount(regs, samplesHistName)
-		if err != nil {
+		actualCount, err := getHistogramCount(registry, samplesHistName)
+		if err != nil && !(err == errMetricNotFound && expectedCount == 0) {
 			t.Fatalf("For t0=%s, t1=%s, failed to getHistogramCount: %#+v", t0, t1, err)
 		}
 		t.Logf("For i=%d, ddt=%s, t1=%s, diff=%s, dt=%s, count=%d", i, ddt, t1, diff, dt, actualCount)
@@ -94,28 +97,26 @@ func TestSampler(t *testing.T) {
 	}
 }
 
-/* getHistogramCount returns the count of the named histogram */
-func getHistogramCount(regs Registerables, metricName string) (int64, error) {
-	considered := []string{}
-	mfs, err := legacyregistry.DefaultGatherer.Gather()
+/* getHistogramCount returns the count of the named histogram or an error (if any) */
+func getHistogramCount(registry compbasemetrics.KubeRegistry, metricName string) (int64, error) {
+	mfs, err := registry.Gather()
 	if err != nil {
-		return 0, fmt.Errorf("failed to gather metrics: %s", err)
+		return 0, fmt.Errorf("failed to gather metrics: %w", err)
 	}
 	for _, mf := range mfs {
 		thisName := mf.GetName()
 		if thisName != metricName {
-			considered = append(considered, thisName)
 			continue
 		}
 		metric := mf.GetMetric()[0]
 		hist := metric.GetHistogram()
 		if hist == nil {
-			return 0, fmt.Errorf("dto.Metric has nil Histogram")
+			return 0, errors.New("dto.Metric has nil Histogram")
 		}
 		if hist.SampleCount == nil {
-			return 0, fmt.Errorf("dto.Histogram has nil SampleCount")
+			return 0, errors.New("dto.Histogram has nil SampleCount")
 		}
 		return int64(*hist.SampleCount), nil
 	}
-	return 0, fmt.Errorf("not found, considered=%#+v", considered)
+	return 0, errMetricNotFound
Member:
Why return an error? If it's not found, then 0 is the correct count.

Member Author:
Because there are two ways to get zero: metric not found, or metric found and contains zero. No need to lose that distinction.

Member:
I actually don't think the metric exists until it gets written to...

Member Author:
Yes, that is part of what is expected.
My point here is let's not needlessly discard a bit of information about why the get method returned zero, since the point of a test is to not assume that everything goes as expected. If that get method ever returns a zero when zero is not what's expected, it can be helpful to have a bit of explanation of why the zero occurred.

Member:
If a metric can't exist until it's written to, then the error condition is actually not here; it exists at line 119. If int64(*hist.SampleCount) is equal to zero, that is a condition we do not expect, and it should be an error.

Member Author:
I am having trouble parsing "... is actually not here, it exists ...".

This is a behavioral unit test of the sample-and-watermark histograms including their underlying machinery. While we developers expect that the HistogramVec has no metrics before it is written, the point of a behavioral unit test is to not assume more than is necessary. The current revision of this PR can distinguish between different pathologies that lead to an unexpected zero. That seems better to me than not helping to identify what went wrong, in the case of an unexpected zero.

Member:
"While we developers expect that the HistogramVec has no metrics before it is written..."

This is a reasonable expectation given that this is how the underlying Prometheus implementation actually works.

I am saying this is how it should look:

	for _, mf := range mfs {
		thisName := mf.GetName()
		if thisName != metricName {
			continue
		}
		metric := mf.GetMetric()[0]
		hist := metric.GetHistogram()
		if hist == nil {
			return 0, errors.New("dto.Metric has nil Histogram")
		}
		if hist.SampleCount == nil {
			return 0, errors.New("dto.Histogram has nil SampleCount")
		}
		count := int64(*hist.SampleCount)
		if count == 0 {
			return 0, errors.New("we should never have a 0 samplecount here")
		}
		return count, nil
	}
	return 0, nil

Member Author:
I think it is unnecessarily specific for this client of the Prometheus go library to insist that a HistogramVec whose label slice is empty start out in a state where the suggested code in the previous comment executes the return 0, nil statement. Remember that calling NewHistogram produces a Histogram with a sample count of zero. So such a thing is perfectly fine, semantically. A HistogramVec whose label slice is empty can only ever have one Histogram in it. If the HistogramVec implementation were to choose to create the only possible Histogram in this case eagerly, who cares? Maybe somebody with other Prometheus use cases in mind, but I do not think that clients of sample-and-watermark histograms would care.

}