Occasional "Error on ingesting results from rule evaluation with different value but same timestamp" warnings in the log #2887

Closed

hasso opened this issue Jun 29, 2017 · 2 comments

hasso commented Jun 29, 2017

There are occasional warnings in my Prometheus log:

Jun 29 10:05:18 collector prometheus[13853]: time="2017-06-29T10:05:18Z" level=warning msg="Error on ingesting results from rule evaluation with different value but same timestamp" numDropped=1 source="manager.go:313"

Changing log level to debug also prints this:

Jun 29 10:05:18 collector prometheus[13853]: time="2017-06-29T10:05:18Z" level=debug msg="Rule evaluation result discarded" error="sample with repeated timestamp but different value" sample=ALERTS{alertname="highConversionDrop", alertstate="pending", instance="worker1", severity="major"} => 1 @[1498730718.595] source="manager.go:303"

There are more alerts active in the system, but the warning is always about highConversionDrop, which is special: its input metric is calculated inside Prometheus by a recording rule.
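
A handy way to watch what this alert is doing around the time of the warning is to graph the synthetic ALERTS series in the expression browser; the selector below just plugs in the alert name from my rules:

ALERTS{alertname="highConversionDrop"}[10m]

Pending and firing show up as separate series there because of the alertstate label.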

Environment

collector ~ $ prometheus -version
prometheus, version 1.7.1 (branch: master, revision: 3afb3fffa3a29c3de865e1172fb740442e9d0133)
  build user:       root@0aa1b7fc430d
  build date:       20170612-11:44:05
  go version:       go1.8.3
collector ~ $ uname -srm
Linux 4.9.34 x86_64
collector ~ $
  • Prometheus configuration file:
global:
  scrape_interval:     10s
  evaluation_interval: 10s

rule_files:
  - /var/lib/prometheus/encoder.rules
  - /var/lib/prometheus/alerts.rules

scrape_configs:
  - job_name: 'collectd'
    static_configs:
      - targets:
        - 172.16.63.254:54001
        - 172.16.67.254:54001
        - 172.16.71.254:54001
        - 172.16.75.254:54001
        - 172.16.79.254:54001
        - 172.16.83.254:54001
        - 172.16.87.254:54001
        - 172.16.91.254:54001
    metric_relabel_configs:
      - source_labels: [exported_instance]
        regex: (.*)
        target_label: instance
        replacement: ${1}
      - source_labels: []
        target_label: exported_instance
        replacement: ""
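
As far as I understand relabelling, the two steps above copy exported_instance into instance and then drop exported_instance (setting a label to the empty string removes it). So a made-up sample scraped as

encoder_input_frames{instance="172.16.63.254:54001",exported_instance="worker1"}

should end up stored as

encoder_input_frames{instance="worker1"}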
  • encoder.rules file:
encoder_drop_frames = encoder_input_frames - encoder_output_frames
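
To check that the recorded series keeps up with its source expression, I can compare the two directly (this uses only the metric names from the rule above; brief mismatches right after a scrape, before the next evaluation, are expected):

encoder_drop_frames != (encoder_input_frames - encoder_output_frames)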
  • alerts.rules
ALERT highConversionDrop
    IF (encoder_drop_frames > 1) FOR 1m
    LABELS { severity="major" }
    ANNOTATIONS {
        summary = "High drop rate",
        description = "{{ $labels.instance }} drop rate exceeds threshold."
    }
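
Since the IF expression reads the recorded series, the whole chain inlined is equivalent to alerting on

(encoder_input_frames - encoder_output_frames) > 1

apart from the extra evaluation-interval delay the recording rule introduces; that indirection is the only thing that distinguishes highConversionDrop from the other alerts here.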
brian-brazil (Member) commented Jul 14, 2017

(This comment has been minimized.)

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 23, 2019
