
Remote storage adapter reads InfluxDB data only from the default retention policy #2779

Closed · sesto opened this issue on May 29, 2017 · 2 comments
sesto commented May 29, 2017

What did you do?
I have been experimenting with the remote storage adapter to see how exporting data to InfluxDB works. I created two retention policies in InfluxDB, "low_density" and "high_density", with the "high_density" policy set as the default:

name         duration  shardGroupDuration replicaN default
----         --------  ------------------ -------- -------
autogen      0s        168h0m0s           1        false
low_density  8736h0m0s 168h0m0s           1        false
high_density 1h0m0s    1h0m0s             1        true
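
For reference, retention policies like these are typically created with standard InfluxQL along the following lines (a sketch; the database name prometheus is taken from the log below, and the durations match the listing above):

CREATE RETENTION POLICY "low_density" ON "prometheus" DURATION 8736h REPLICATION 1
CREATE RETENTION POLICY "high_density" ON "prometheus" DURATION 1h REPLICATION 1 DEFAULT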

I start the remote storage adapter with the parameter:
-influxdb.retention-policy=low_density

What did you expect to see?

I expected the adapter to write to and read from the low_density retention policy.

What did you see instead? Under which circumstances?

The adapter writes data to the low_density retention policy as expected, but it reads data from the default high_density retention policy.

Below is the InfluxDB log:

[httpd] 172.17.0.1 - - [28/May/2017:02:43:31 +0000] "POST /write?consistency=&db=prometheus&precision=ms&rp=low_density HTTP/1.1" 204 0 "-" "InfluxDBClient" 6fbe3de0-434f-11e7-809f-000000000000 2360
[I] 2017-05-28T02:43:34Z SELECT value FROM prometheus.high_density.scrape_duration_seconds WHERE time >= 1496039758357ms AND time <= 1496040058357ms GROUP BY * service=query
[httpd] 172.17.0.1 - - [28/May/2017:02:43:34 +0000] "POST /query?db=prometheus&epoch=ms&params=%7B%7D&q=SELECT+value+FROM+%22scrape_duration_seconds%22+WHERE+time+%3E%3D+1496039758357ms+AND+time+%3C%3D+1496040058357ms+GROUP+BY+%2A HTTP/1.1" 200 62 "-" "InfluxDBClient" 715e8cbd-434f-11e7-80a0-000000000000 5006
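
The log shows the asymmetry: the write request targets rp=low_density, but the read query leaves the measurement unqualified ("scrape_duration_seconds"), so InfluxDB resolves it against the default retention policy, as the expanded query prometheus.high_density.scrape_duration_seconds shows. In InfluxQL a measurement can be fully qualified as database.retention_policy.measurement, so a read scoped to the configured policy would look roughly like this (a sketch using the same time bounds as the log above):

SELECT value FROM "prometheus"."low_density"."scrape_duration_seconds" WHERE time >= 1496039758357ms AND time <= 1496040058357ms GROUP BY *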

Environment
OS X 10.11.4
Prometheus and InfluxDB run as Docker containers; the remote storage adapter runs as a Go application on the host (Go 1.8.1).

  • System information:

Darwin 15.4.0 x86_64

  • Prometheus version:
prometheus, version 1.6.1 (branch: master, revision: 4666df502c0e239ed4aa1d80abbbfb54f61b23c3)
  build user:       root@7e45fa0366a7
  build date:       20170419-14:32:22
  go version:       go1.8.1
  • Alertmanager version:

    insert output of alertmanager -version here (if relevant to the issue)

  • Prometheus configuration file:

global:
  scrape_interval:     1s # Scrape targets every second (default is every 15 seconds).
  evaluation_interval: 1s # Evaluate rules every second (default is every 15 seconds).
scrape_configs:
  - job_name: "test_service"
    scrape_interval: "1s"
    scheme: "http"
    metrics_path: "/mgmt/prometheus"
    static_configs:
      - targets: ['172.16.179.252:8081']

# Remote write configuration (for Graphite, OpenTSDB, or InfluxDB).
remote_write:
  - url: "http://172.16.179.252:9201/write"

# Remote read configuration (for InfluxDB only at the moment).
remote_read:
  - url: "http://172.16.179.252:9201/read"
juliusv (Member) commented May 29, 2017

@sesto Thank you for reporting this! #2781 should fix it; can you confirm?

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited the conversation to collaborators on Mar 23, 2019
