
Prometheus v2.52.0 raises "Error on ingesting samples with different value but same timestamp" for kube-state-metrics #14089

Closed
rgarcia89 opened this issue May 13, 2024 · 19 comments


@rgarcia89

What did you do?

Hello,

With the update to Prometheus v2.52.0 (https://github.com/prometheus/prometheus/releases/tag/v2.52.0), Prometheus has started logging an error indicating duplicated samples from kube-state-metrics.

Consequently, it triggers the following rule, which is part of the Prometheus Operator's kube-prometheus (https://github.com/prometheus-operator/kube-prometheus) project that I use to deploy the monitoring environment:

- alert: PrometheusDuplicateTimestamps
  annotations:
    description: Prometheus {{$labels.namespace}}/{{$labels.pod}} is dropping {{ printf "%.4g" $value  }} samples/s with different values but duplicated timestamp.
    runbook_url: https://runbooks.prometheus-operator.dev/runbooks/prometheus/prometheusduplicatetimestamps
    summary: Prometheus is dropping samples with duplicate timestamps.
  expr: |
    rate(prometheus_target_scrapes_sample_duplicate_timestamp_total{job=~"prometheus.*",namespace="monitoring"}[5m]) > 0
  for: 10m
  labels:
    severity: warning

I don't see any duplicates in these metrics, which raises the question of why the scrape manager is reporting an issue.
kube-state-metrics /metrics output:

# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.1373e-05
go_gc_duration_seconds{quantile="0.25"} 4.3932e-05
go_gc_duration_seconds{quantile="0.5"} 5.7374e-05
go_gc_duration_seconds{quantile="0.75"} 7.8606e-05
go_gc_duration_seconds{quantile="1"} 0.091648893
go_gc_duration_seconds_sum 0.348674908
go_gc_duration_seconds_count 143
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 127
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.21.8"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 9.426192e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.10105044e+09
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.608599e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 7.005798e+06
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 6.29496e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 9.426192e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 8.8285184e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 1.5163392e+07
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 41144
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 8.1780736e+07
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.03448576e+08
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.7155965721111138e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 7.046942e+06
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 2400
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 15600
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 268968
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 619248
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 1.8220024e+07
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 628721
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 1.409024e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 1.409024e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.14024728e+08
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 9
# HELP http_request_duration_seconds A histogram of requests for kube-state-metrics metrics handler.
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{handler="metrics",method="get",le="0.005"} 12
http_request_duration_seconds_bucket{handler="metrics",method="get",le="0.01"} 183
http_request_duration_seconds_bucket{handler="metrics",method="get",le="0.025"} 184
http_request_duration_seconds_bucket{handler="metrics",method="get",le="0.05"} 185
http_request_duration_seconds_bucket{handler="metrics",method="get",le="0.1"} 186
http_request_duration_seconds_bucket{handler="metrics",method="get",le="0.25"} 186
http_request_duration_seconds_bucket{handler="metrics",method="get",le="0.5"} 186
http_request_duration_seconds_bucket{handler="metrics",method="get",le="1"} 186
http_request_duration_seconds_bucket{handler="metrics",method="get",le="2.5"} 186
http_request_duration_seconds_bucket{handler="metrics",method="get",le="5"} 186
http_request_duration_seconds_bucket{handler="metrics",method="get",le="10"} 186
http_request_duration_seconds_bucket{handler="metrics",method="get",le="+Inf"} 186
http_request_duration_seconds_sum{handler="metrics",method="get"} 1.295945279
http_request_duration_seconds_count{handler="metrics",method="get"} 186
# HELP kube_state_metrics_build_info A metric with a constant '1' value labeled by version, revision, branch, goversion from which kube_state_metrics was built, and the goos and goarch for the build.
# TYPE kube_state_metrics_build_info gauge
kube_state_metrics_build_info{branch="",goarch="amd64",goos="linux",goversion="go1.21.8",revision="unknown",tags="unknown",version="v2.12.0"} 1
# HELP kube_state_metrics_custom_resource_state_add_events_total Number of times that the CRD informer triggered the add event.
# TYPE kube_state_metrics_custom_resource_state_add_events_total counter
kube_state_metrics_custom_resource_state_add_events_total 0
# HELP kube_state_metrics_custom_resource_state_cache Net amount of CRDs affecting the cache currently.
# TYPE kube_state_metrics_custom_resource_state_cache gauge
kube_state_metrics_custom_resource_state_cache 0
# HELP kube_state_metrics_custom_resource_state_delete_events_total Number of times that the CRD informer triggered the remove event.
# TYPE kube_state_metrics_custom_resource_state_delete_events_total counter
kube_state_metrics_custom_resource_state_delete_events_total 0
# HELP kube_state_metrics_list_total Number of total resource list in kube-state-metrics
# TYPE kube_state_metrics_list_total counter
kube_state_metrics_list_total{resource="*v1.CertificateSigningRequest",result="success"} 1
kube_state_metrics_list_total{resource="*v1.ConfigMap",result="success"} 1
kube_state_metrics_list_total{resource="*v1.CronJob",result="success"} 1
kube_state_metrics_list_total{resource="*v1.DaemonSet",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Deployment",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Endpoints",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Ingress",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Job",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Lease",result="success"} 1
kube_state_metrics_list_total{resource="*v1.LimitRange",result="success"} 1
kube_state_metrics_list_total{resource="*v1.MutatingWebhookConfiguration",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Namespace",result="success"} 1
kube_state_metrics_list_total{resource="*v1.NetworkPolicy",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Node",result="success"} 1
kube_state_metrics_list_total{resource="*v1.PersistentVolume",result="success"} 1
kube_state_metrics_list_total{resource="*v1.PersistentVolumeClaim",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Pod",result="success"} 1
kube_state_metrics_list_total{resource="*v1.PodDisruptionBudget",result="success"} 1
kube_state_metrics_list_total{resource="*v1.ReplicaSet",result="success"} 1
kube_state_metrics_list_total{resource="*v1.ReplicationController",result="success"} 1
kube_state_metrics_list_total{resource="*v1.ResourceQuota",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Secret",result="success"} 1
kube_state_metrics_list_total{resource="*v1.Service",result="success"} 1
kube_state_metrics_list_total{resource="*v1.StatefulSet",result="success"} 1
kube_state_metrics_list_total{resource="*v1.StorageClass",result="success"} 1
kube_state_metrics_list_total{resource="*v1.ValidatingWebhookConfiguration",result="success"} 1
kube_state_metrics_list_total{resource="*v1.VolumeAttachment",result="success"} 1
kube_state_metrics_list_total{resource="*v2.HorizontalPodAutoscaler",result="success"} 1
# HELP kube_state_metrics_shard_ordinal Current sharding ordinal/index of this instance
# TYPE kube_state_metrics_shard_ordinal gauge
kube_state_metrics_shard_ordinal{shard_ordinal="0"} 0
# HELP kube_state_metrics_total_shards Number of total shards this instance is aware of
# TYPE kube_state_metrics_total_shards gauge
kube_state_metrics_total_shards 1
# HELP kube_state_metrics_watch_total Number of total resource watches in kube-state-metrics
# TYPE kube_state_metrics_watch_total counter
kube_state_metrics_watch_total{resource="*v1.CertificateSigningRequest",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.ConfigMap",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.CronJob",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.DaemonSet",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.Deployment",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.Endpoints",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.Ingress",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.Job",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.Lease",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.LimitRange",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.MutatingWebhookConfiguration",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.Namespace",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.NetworkPolicy",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.Node",result="success"} 12
kube_state_metrics_watch_total{resource="*v1.PersistentVolume",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.PersistentVolumeClaim",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.Pod",result="success"} 12
kube_state_metrics_watch_total{resource="*v1.PodDisruptionBudget",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.ReplicaSet",result="success"} 12
kube_state_metrics_watch_total{resource="*v1.ReplicationController",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.ResourceQuota",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.Secret",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.Service",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.StatefulSet",result="success"} 12
kube_state_metrics_watch_total{resource="*v1.StorageClass",result="success"} 13
kube_state_metrics_watch_total{resource="*v1.ValidatingWebhookConfiguration",result="success"} 14
kube_state_metrics_watch_total{resource="*v1.VolumeAttachment",result="success"} 15
kube_state_metrics_watch_total{resource="*v2.HorizontalPodAutoscaler",result="success"} 14
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 15.64
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 12
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 8.7568384e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.71559079044e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.36824832e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19

What did you expect to see?

No response

What did you see instead? Under which circumstances?

See the Logs section below.

System information

No response

Prometheus version

v2.52.0

Prometheus configuration file

No response

Alertmanager version

No response

Alertmanager configuration file

No response

Logs

ts=2024-05-13T08:34:50.177Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
ts=2024-05-13T08:34:50.177Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
ts=2024-05-13T08:34:50.186Z caller=kubernetes.go:331 level=info component="discovery manager scrape" discovery=kubernetes config=serviceMonitor/gitlab-runner/gitlab-runner/0 msg="Using pod service account via in-cluster config"
...
ts=2024-05-13T08:34:50.192Z caller=kubernetes.go:331 level=info component="discovery manager notify" discovery=kubernetes config=config-0 msg="Using pod service account via in-cluster config"
ts=2024-05-13T08:34:50.197Z caller=klog.go:124 level=error component=k8s_client_runtime func=Errorf msg="Unexpected error when reading response body: context canceled"
ts=2024-05-13T08:34:50.215Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml totalDuration=38.200371ms db_storage=1.931µs remote_storage=107.309µs web_handler=601ns query_engine=967ns scrape=94.875µs scrape_sd=5.973043ms notify=14.947µs notify_sd=312.05µs rules=22.883265ms tracing=5.768µs
ts=2024-05-13T08:34:52.704Z caller=dedupe.go:112 component=remote level=info remote_name=18d395 url=https://prometheus-lab.net/api/v1/write msg="Done replaying WAL" duration=2.55382309s
ts=2024-05-13T08:35:13.709Z caller=scrape.go:1738 level=warn component="scrape manager" scrape_pool=serviceMonitor/monitoring/kube-state-metrics/0 target=https://10.244.1.205:8443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=1
ts=2024-05-13T08:35:43.437Z caller=scrape.go:1738 level=warn component="scrape manager" scrape_pool=serviceMonitor/monitoring/kube-state-metrics/0 target=https://10.244.1.205:8443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=1
@rgarcia89
Author

It might have something to do with bairhys/prometheus-frigate-exporter#9 and the check for duplicated series introduced in #12933.

@machine424
Collaborator

Yes, starting with v2.52.0 such "duplicates" are no longer ignored.
In the bairhys/prometheus-frigate-exporter#9 case, the client was indeed exposing duplicated values for the same timestamp and a fix was merged.
Maybe the same is happening with kube-state-metrics.
A debug log after err = storage.ErrDuplicateSampleForTimestamp (with the metric name + labels, and maybe the value and timestamp) would be helpful for clients to adjust to the new behaviour.
cc @bboreham as you reviewed the feature.
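For illustration, a minimal, self-contained sketch of the kind of debug line being proposed (the stand-in error variable and the logDuplicate helper are hypothetical; the real change would live in the scrape loop and use its go-kit logger):

package main

import (
    "errors"
    "fmt"
)

// Stand-in for storage.ErrDuplicateSampleForTimestamp (hypothetical here).
var errDuplicateSampleForTimestamp = errors.New("duplicate sample for timestamp")

// logDuplicate sketches the proposal: when appending fails with the
// duplicate-timestamp error, log the full series (name + labels),
// the timestamp and the value so exporters can adjust.
func logDuplicate(err error, series string, ts int64, val float64) {
    if errors.Is(err, errDuplicateSampleForTimestamp) {
        fmt.Printf("level=debug msg=%q series=%q timestamp=%d value=%g\n",
            "Duplicate sample for timestamp", series, ts, val)
    }
}

func main() {
    logDuplicate(errDuplicateSampleForTimestamp,
        `kube_pod_tolerations{namespace="calico-system",key="CriticalAddonsOnly",operator="Exists"}`,
        1715596572000, 1)
}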

@rgarcia89
Author

@machine424 That was my thought as well, but I don't see any duplicates in the metrics, so I'm a bit confused right now.

However, I like the idea of showing the failing metrics in the debug log.

@machine424
Collaborator

Yes, it's not that easy to debug. If you want to add that log, please go ahead. We'll see if we can add it to any potential v2.52.1.
Otherwise I can open a PR.

@bboreham
Member

I see this from the report:

msg="Error on ingesting samples with different value but same timestamp" num_dropped=1

This is intentionally not giving any details on series, just the number.
We could perhaps record the first error, to avoid generating a lot of extra work.
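A rough sketch of that idea, assuming a per-scrape accumulator similar to the appErrs seen in the scrape code (the names here are hypothetical, not a proposed patch):

package main

import "fmt"

// appendErrors sketches a per-scrape accumulator: count every duplicate,
// but remember only the first offending series so logging stays cheap.
type appendErrors struct {
    numDuplicates  int
    firstDuplicate string // empty until the first duplicate is seen
}

func (a *appendErrors) recordDuplicate(series string) {
    a.numDuplicates++
    if a.firstDuplicate == "" {
        a.firstDuplicate = series
    }
}

func main() {
    var errs appendErrors
    errs.recordDuplicate(`kube_pod_tolerations{key="CriticalAddonsOnly",operator="Exists"}`)
    errs.recordDuplicate(`some_other_series{foo="bar"}`)
    fmt.Printf("num_dropped=%d first_series=%q\n", errs.numDuplicates, errs.firstDuplicate)
}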

@bboreham
Member

Prometheus configuration file
No response

This makes it harder to tell if your problem could be relabeling.

@machine424
Collaborator

Actually, now that I'm looking at the code for real, I think a debug log should already be provided via checkAddError

prometheus/scrape/scrape.go

Lines 1781 to 1785 in 3b8b577

case errors.Is(err, storage.ErrDuplicateSampleForTimestamp):
    appErrs.numDuplicates++
    level.Debug(sl.l).Log("msg", "Duplicate sample for timestamp", "series", string(met))
    sl.metrics.targetScrapeSampleDuplicate.Inc()
    return false, nil

(So no need for the extra debug log.)
You can set --log.level=debug and see.

@rgarcia89
Author

@machine424 you are right, the debug log is already implemented. I just deployed a Prometheus instance with the debug log level enabled. It seems kube-state-metrics is indeed producing duplicate samples...

ts=2024-05-13T19:20:40.233Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml totalDuration=95.860644ms db_storage=1.142µs remote_storage=150.634µs web_handler=872ns query_engine=776ns scrape=98.941µs scrape_sd=7.197985ms notify=13.095µs notify_sd=269.119µs rules=54.251368ms tracing=6.745µs
...
ts=2024-05-13T19:21:09.190Z caller=scrape.go:1777 level=debug component="scrape manager" scrape_pool=serviceMonitor/monitoring/kube-state-metrics/0 target=https://10.244.5.6:8443/metrics msg="Duplicate sample for timestamp" series="kube_pod_tolerations{namespace=\"calico-system\",pod=\"calico-kube-controllers-75c647b46c-pg9cr\",uid=\"bf944c52-17bd-438b-bbf1-d97f8671bd6b\",key=\"CriticalAddonsOnly\",operator=\"Exists\"}"
ts=2024-05-13T19:21:09.207Z caller=scrape.go:1738 level=warn component="scrape manager" scrape_pool=serviceMonitor/monitoring/kube-state-metrics/0 target=https://10.244.5.6:8443/metrics msg="Error on ingesting samples with different value but same timestamp" num_dropped=1

@rgarcia89
Author

And indeed it is doing so on purpose... I'm not sure what AKS is doing here, but the toleration exists twice on the calico-kube-controllers deployment:

       tolerations:
       - key: CriticalAddonsOnly
         operator: Exists
       - effect: NoSchedule
         key: node-role.kubernetes.io/master
       - effect: NoSchedule
         key: node-role.kubernetes.io/control-plane
       - key: CriticalAddonsOnly
         operator: Exists

So it seems everything is working fine on the Prometheus and kube-state-metrics side 👍
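To illustrate why this trips the check: kube_pod_tolerations derives its label set only from the toleration fields, so two identical array entries collapse into the same series. A small hypothetical reproduction of that collision (not actual kube-state-metrics code, and the label set is simplified):

package main

import "fmt"

// toleration holds the fields that end up as labels on kube_pod_tolerations.
type toleration struct {
    Key, Operator, Value, Effect string
}

func seriesFor(ns, pod string, t toleration) string {
    return fmt.Sprintf(`kube_pod_tolerations{namespace=%q,pod=%q,key=%q,operator=%q,value=%q,effect=%q}`,
        ns, pod, t.Key, t.Operator, t.Value, t.Effect)
}

func main() {
    tolerations := []toleration{
        {Key: "CriticalAddonsOnly", Operator: "Exists"},
        {Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"},
        {Key: "CriticalAddonsOnly", Operator: "Exists"}, // duplicate entry, as seen on AKS
    }
    seen := map[string]bool{}
    for _, t := range tolerations {
        s := seriesFor("calico-system", "calico-kube-controllers-75c647b46c-pg9cr", t)
        if seen[s] {
            fmt.Println("duplicate series:", s)
        }
        seen[s] = true
    }
}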

@prymitive
Contributor

Yes, starting with v2.52.0 such "duplicates" are no longer ignored.

“Ignored” is probably the wrong word here. It’s a little bit more complicated than that.
You might have some time series exposed multiple times with different values, in which case I think the last one will be appended to the TSDB, and that doesn't have to be the “correct” one.
Or you can even imagine the metrics response giving a different order of samples on each scrape, which for counters might mean bogus results.

@machine424
Collaborator

machine424 commented May 14, 2024

So it seems like everything is working fine on prometheus and kube-state-metrics side 👍

I think this is worth creating an issue on kube-state-metrics as well.
As the tolerations array permits "duplicates", and depending on kube_pod_tolerations' intent, there might be a need to deduplicate, or to add an index label or something to identify each toleration, as sketched below.
In this case it seems to be "harmless", but perhaps the same approach is applied to other arrays. It's important to ensure they are aware of this.
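A hedged sketch of both options, using a hypothetical toleration type: either drop exact repeats before emitting the series, or add an index label so identical tolerations stay distinguishable.

package main

import "fmt"

type toleration struct {
    Key, Operator, Value, Effect string
}

// dedupe drops exact repeats while preserving order (option 1).
func dedupe(ts []toleration) []toleration {
    seen := map[toleration]bool{}
    var out []toleration
    for _, t := range ts {
        if !seen[t] {
            seen[t] = true
            out = append(out, t)
        }
    }
    return out
}

// indexed emits one series per array position (option 2); even identical
// tolerations then map to distinct series via the index label.
func indexed(ts []toleration) []string {
    var out []string
    for i, t := range ts {
        out = append(out,
            fmt.Sprintf(`kube_pod_tolerations{index="%d",key=%q,operator=%q}`, i, t.Key, t.Operator))
    }
    return out
}

func main() {
    ts := []toleration{
        {Key: "CriticalAddonsOnly", Operator: "Exists"},
        {Key: "CriticalAddonsOnly", Operator: "Exists"},
    }
    fmt.Println(len(dedupe(ts)), "unique toleration(s) after dedup")
    for _, s := range indexed(ts) {
        fmt.Println(s)
    }
}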

@rgarcia89
Author

@machine424 will do. It's quite confusing that "duplicates" are allowed within the tolerations array; I wasn't expecting that.

@rgarcia89
Author

Closing here - since everything is working as expected on the Prometheus side, thanks to everyone for your help!

@bootc

bootc commented May 14, 2024

For those finding this issue and wanting to follow on with kube-state-metrics, you want:
kubernetes/kube-state-metrics#2390

@bboreham
Member

Thanks for investigation @machine424.
One nit: "Error on ingesting samples with different value but same timestamp" - don't they all have the same value, i.e. 1?
I think this comes from Prometheus re-using an error in a slightly different context.

@machine424
Collaborator

machine424 commented May 14, 2024

One nit: "Error on ingesting samples with different value but same timestamp" - don't they all have the same value, i.e. 1?
I think this comes from Prometheus re-using an error in a slightly different context.

Good point. I think we agree that even in such cases (same value), we should continue to consider it an error. This can help highlight a hidden issue (targets shouldn't rely on Prometheus deduplicating that, IIUC). But I'm afraid some targets may be relying on the old behavior, especially the ones with honor_timestamps ("no need to clean the exposed metrics, Prometheus will take care of that").

That being said, the TSDB doesn't consider samples with the same timestamps and the same value as duplicates; it tolerates that:

if t == msMaxt {
    // We are allowing exact duplicates as we can encounter them in valid cases
    // like federation and erroring out at that time would be extremely noisy.
    // This only checks against the latest in-order sample.
    // The OOO headchunk has its own method to detect these duplicates.
    if math.Float64bits(s.lastValue) != math.Float64bits(v) {
        return false, 0, storage.ErrDuplicateSampleForTimestamp
    }
    // Sample is identical (ts + value) with most current (highest ts) sample in sampleBuf.
    return false, 0, nil
}

Hence, the explicit warning message.
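As a standalone illustration of that check (a sketch, not the actual head-appender code): the comparison is on the raw float bit patterns, so an exact repeat of timestamp and value passes silently, while a differing value at the same timestamp yields the duplicate error.

package main

import (
    "errors"
    "fmt"
    "math"
)

// Stand-in for storage.ErrDuplicateSampleForTimestamp.
var errDuplicateSampleForTimestamp = errors.New("duplicate sample for timestamp")

// acceptSample mimics the quoted logic for a sample arriving at the same
// timestamp as the latest in-order sample (lastValue).
func acceptSample(lastValue, v float64) error {
    if math.Float64bits(lastValue) != math.Float64bits(v) {
        return errDuplicateSampleForTimestamp
    }
    return nil // exact duplicate (same ts and value): tolerated, e.g. for federation
}

func main() {
    fmt.Println(acceptSample(1, 1)) // <nil>: identical value, the TSDB tolerates it
    fmt.Println(acceptSample(1, 2)) // duplicate sample for timestamp
}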

If we want to maintain the current behavior, I agree we shouldn't use a storage error (ErrDuplicateSampleForTimestamp) for a scrape-phase issue.

@rgarcia89
Author

rgarcia89 commented May 14, 2024

I'd also like to suggest revising the warning message triggered by duplicate series in Prometheus. In my experience, the message didn't accurately reflect the situation, as both samples had identical values.

Similarly, the prometheus_target_scrapes_sample_duplicate_timestamp_total counter seems to be incrementing even when the duplicate samples have the same value, which contradicts its intended purpose - at least by the current definition.

While I understand the logic behind rejecting duplicate samples, I'm a bit confused about the implementation, as the underlying TSDB is accepting such cases.

@bboreham
Member

Duplicate labels on scrape are a clear logic error (at least in the mind of some people who worked on it).

Duplicate sample fed into TSDB is something that happens, e.g. on some kinds of restart, and we prefer simple logic to always accept it over complicated logic aimed at particular corner cases.

I only wanted to nitpick the wording of the message, not change behaviour. See also #13277 (comment).

Unfortunately it would be more of a breaking change to rename prometheus_target_scrapes_sample_duplicate_timestamp_total.

@machine424
Collaborator

@bboreham do you think that could be done as part of #13277, or should we create an issue for it?
