
Data loss after restart #3822

Closed
gtaspider opened this Issue Feb 11, 2018 · 3 comments

gtaspider commented Feb 11, 2018

Using Prometheus 2.1.0 on Windows 10 x64, I ran into the following issue:
Yesterday I recorded some data (test_count from a self-written node exporter), then shut down my system. Today I started my system, including Prometheus, again, but the data from yesterday isn't shown in Prometheus/Grafana. There are files in data\wal\ with a timestamp from yesterday (000006, 000007, 000008) and from today (000009, 000010, 000011), where the last file is the biggest (256 MB).
I tried some queries, but even sum(test_count) didn't work out (it only sums the data since the last start).

When I restart Prometheus & Grafana or my system now, it doesn't result in data loss, but I had the same issue the day before yesterday (all data points were gone).

Is there some switch I'm missing, like only loading the last 6 hours of data, or is this a bug?

  • Prometheus configuration file:
```yaml
# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9100']
```

@pgier

Contributor

pgier commented Feb 12, 2018

Are you setting the "storage.tsdb.retention" command-line flag? It defaults to 15 days, but if you set it to 1d or less, that would cause your data from the previous day to get garbage collected.
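For reference, a minimal sketch of how that flag is passed on the command line in Prometheus 2.x (the config file path here is just a placeholder; 15d is the default value):

```shell
# Start Prometheus with an explicit retention window.
# A value of 1d or less would explain data from the previous day being dropped.
prometheus --config.file=prometheus.yml --storage.tsdb.retention=15d
```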

@gtaspider

Author

gtaspider commented Feb 12, 2018

No, I did not. But after deleting Prometheus and unpacking it again, it seems to work today, which is strange since I hadn't changed anything. I think this can be closed; sorry for the inconvenience...

@gtaspider gtaspider closed this Feb 12, 2018

@lock


lock bot commented Mar 22, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked and limited conversation to collaborators Mar 22, 2019
