remote_read returning the same value twice #4822

Open
theonlydoo opened this Issue Nov 5, 2018 · 0 comments

theonlydoo commented Nov 5, 2018

Bug Report

What did you do?
This type of remote_read setup (the topology was shown in an attached diagram, "untitled diagram 2"):

serverA and serverB both scrape the same metrics, but not for all jobs: some jobs are scraped by only one of the two servers, and some jobs are scraped by both.
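
The duplication is easy to see with a plain instant query against the read_only node's HTTP API (sketch only; up is just used here as a generic metric that both serverA and serverB expose for the shared jobs):

# Run on the read_only node itself; for jobs scraped by both serverA and serverB,
# the same series shows up twice in the JSON result.
curl -s 'http://localhost:9090/api/v1/query?query=up'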

What did you expect to see?
Each metric returned once.

What did you see instead? Under which circumstances?
Two series with identical labels returned, i.e. the same metric comes back twice.

Environment

  • System information:

Linux 4.9.0-6-amd64 x86_64
and
Linux 4.9.133-xxxx-std-ipv6-64 x86_64

  • Prometheus version:
prometheus, version 2.4.3 (branch: HEAD, revision: 167a4b4e73a8eca8df648d2d2043e21bdb9a7449)
  build user:       root@1e42b46043e9
  build date:       20181004-08:42:02
  go version:       go1.11.1
  • Prometheus configuration file:
    on the read_only node:
# my global config
global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s
  scrape_timeout:       15s
  # Overrides the default scrape_timeout of 10s.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'me'

remote_read:
    - url: "http://serverC:9090/api/v1/read"
      read_recent: true
      remote_timeout: 30s
    - url: "http://serverD:9090/api/v1/read"
      read_recent: true
      remote_timeout: 30s
    - url: "http://serverA:9090/api/v1/read"
      read_recent: true
      remote_timeout: 30s
#    - url: "http://serverB:9090/api/v1/read"
#      read_recent: true
#      remote_timeout: 30s

on serverA:

# my global config
global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s
  scrape_timeout:       15s
  # Overrides the default scrape_timeout of 10s.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'batch'

remote_read:
    - url: "http://serverB:9090/api/v1/read"
      read_recent: true
      remote_timeout: 30s

There is no remote_read section on serverB.
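
A quick way to check that the duplication comes from the remote_read layer rather than from the upstream servers themselves (again a sketch, with up as a stand-in metric) is to query serverA and serverB directly and compare their results with what the read_only node returns:

# Bypass the read_only node and ask each upstream server directly.
curl -s 'http://serverA:9090/api/v1/query?query=up'
curl -s 'http://serverB:9090/api/v1/query?query=up'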
