prometheus runs fine, but no metrics found #1977

Closed
zxwing opened this Issue Sep 11, 2016 · 8 comments

zxwing commented Sep 11, 2016

Prometheus is running, but all queries return no data; even 'up' returns nothing.

I run it with the command below; the logging level is debug:

[root@172-20-12-46 ~]# /usr/local/zstack/apache-tomcat-7.0.35/webapps/zstack/WEB-INF/classes/tools/prometheus -config.file /usr/local/zstack/prometheus/conf.yaml -storage.local.path /usr/local/zstack/prometheus/data -query.timeout 2m0s -web.listen-address 172.20.12.46:9090 -query.max-concurrency 20 -alertmanager.url http://172.20.12.46:8080/zstack/prometheus/alert -log.level debug
INFO[0000] Starting prometheus (version=0.20.0, branch=master, revision=aeab25c)  source=main.go:73
INFO[0000] Build context (go=go1.6.2, user=root@77050118f904, date=20160616-08:38:14)  source=main.go:74
INFO[0000] Loading configuration file /usr/local/zstack/prometheus/conf.yaml  source=main.go:206
INFO[0000] Loading series map and head chunks...         source=storage.go:341
INFO[0000] 217 series loaded.                            source=storage.go:346
INFO[0000] Listening on 172.20.12.46:9090                source=web.go:241
INFO[0000] Starting target manager...                    source=targetmanager.go:74

You can see that no metrics are showing up:
[screenshot: Prometheus web UI showing no metrics in the dropdown or query results]

However, the collectd_exporter target is running fine and indicates that the last scrape was done 4s ago:

[screenshot: Prometheus targets page showing the collectd_exporter endpoint UP, last scrape 4s ago]

http://172.20.12.46:9103/metrics does have data:
[screenshot: raw /metrics output from collectd_exporter]

Querying the series metadata for 'up' returns nothing:

[root@172-20-12-46 ~]# curl -g 'http://172.20.12.46:9090/api/v1/series?match[]=up'
{"status":"success","data":[]}

My configuration:

[screenshot: conf.yaml contents]
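In essence it just scrapes the collectd_exporter shown above; a minimal conf.yaml sketch along those lines (the job name and intervals below are illustrative, not copied from my actual file):

# sketch only - real values are in the screenshot above
global:
  scrape_interval: 10s              # assumed, not taken from the real config

scrape_configs:
  - job_name: 'collectd'            # job name is an assumption
    static_configs:
      - targets: ['172.20.12.46:9103']   # the collectd_exporter shown above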

This is really weird and I have no clue how to debug it. I am leaving this setup as it is; please suggest some hints for debugging the issue. Thank you!

juliusv commented Sep 11, 2016

Normally, not getting any results is caused by timezone issues (a browser clock set to the future, trying to query data that isn't there yet), but if that were the case, you would at least see metrics in the dropdown.

So this is strange. The "217 series loaded." message at startup at least shows that the storage does contain 217 series.

However, you are running Prometheus version 0.20.0, which is quite old. Could you try the most recent release version (1.1.2) before digging deeper?
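In the meantime, a couple of quick checks against the HTTP API might help narrow it down (a sketch; assuming the standard v1 endpoints are available in your build):

# Instant query for 'up' evaluated at the server's own current time,
# to rule out a browser clock that is ahead of the server
curl -g "http://172.20.12.46:9090/api/v1/query?query=up&time=$(date +%s)"

# List every metric name the server currently knows about
curl -g 'http://172.20.12.46:9090/api/v1/label/__name__/values'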

zxwing commented Sep 11, 2016

@juliusv The timezone is OK, I have double-checked. It used to work on this machine but suddenly stopped working after a few reboots. I switched to 1.1.2, but no luck.

[root@172-20-12-46 prometheus-1.1.2.linux-amd64]# ./prometheus -config.file /usr/local/zstack/prometheus/conf.yaml -storage.local.path /usr/local/zstack/prometheus/data -query.timeout 2m0s -web.listen-address 172.20.12.46:9090 -query.max-concurrency 20 -log.level debug
INFO[0000] Starting prometheus (version=1.1.2, branch=master, revision=36fbdcc30fd13ad796381dc934742c559feeb1b5)  source=main.go:73
INFO[0000] Build context (go=go1.6.3, user=root@a74d279a0d22, date=20160908-13:12:43)  source=main.go:74
INFO[0000] Loading configuration file /usr/local/zstack/prometheus/conf.yaml  source=main.go:221
INFO[0000] Loading series map and head chunks...         source=storage.go:358
INFO[0000] 173 series loaded.                            source=storage.go:363
INFO[0000] Starting target manager...                    source=targetmanager.go:76
WARN[0000] No AlertManagers configured, not dispatching any alerts  source=notifier.go:176
INFO[0000] Listening on 172.20.12.46:9090                source=web.go:233
INFO[0300] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[0300] Done checkpointing in-memory metrics and chunks in 116.81649ms.  source=persistence.go:572
INFO[0600] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[0600] Done checkpointing in-memory metrics and chunks in 83.08807ms.  source=persistence.go:572
INFO[0900] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[0900] Done checkpointing in-memory metrics and chunks in 96.281239ms.  source=persistence.go:572
INFO[1200] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[1200] Done checkpointing in-memory metrics and chunks in 90.590978ms.  source=persistence.go:572
INFO[1500] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[1500] Done checkpointing in-memory metrics and chunks in 79.940375ms.  source=persistence.go:572
INFO[1740] Completed maintenance sweep through 173 in-memory fingerprints in 28m50.480869598s.  source=storage.go:1086
INFO[1800] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[1800] Done checkpointing in-memory metrics and chunks in 99.356651ms.  source=persistence.go:572
INFO[2100] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[2100] Done checkpointing in-memory metrics and chunks in 50.138952ms.  source=persistence.go:572
INFO[2400] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[2400] Done checkpointing in-memory metrics and chunks in 77.677379ms.  source=persistence.go:572
INFO[2700] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[2700] Done checkpointing in-memory metrics and chunks in 63.295323ms.  source=persistence.go:572
INFO[3000] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[3000] Done checkpointing in-memory metrics and chunks in 70.712856ms.  source=persistence.go:572
INFO[3300] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[3301] Done checkpointing in-memory metrics and chunks in 91.37509ms.  source=persistence.go:572
INFO[3361] Completed maintenance sweep through 161 in-memory fingerprints in 26m50.451344248s.  source=storage.go:1086
INFO[3601] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[3601] Done checkpointing in-memory metrics and chunks in 75.01283ms.  source=persistence.go:572
INFO[3901] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[3901] Done checkpointing in-memory metrics and chunks in 98.695543ms.  source=persistence.go:572
INFO[4201] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[4201] Done checkpointing in-memory metrics and chunks in 64.099585ms.  source=persistence.go:572
INFO[4501] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[4501] Done checkpointing in-memory metrics and chunks in 73.224812ms.  source=persistence.go:572
INFO[4801] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[4801] Done checkpointing in-memory metrics and chunks in 80.125455ms.  source=persistence.go:572
INFO[4981] Completed maintenance sweep through 161 in-memory fingerprints in 26m50.398307479s.  source=storage.go:1086
INFO[5101] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[5101] Done checkpointing in-memory metrics and chunks in 64.778562ms.  source=persistence.go:572
INFO[5401] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[5401] Done checkpointing in-memory metrics and chunks in 71.074775ms.  source=persistence.go:572
INFO[5701] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[5701] Done checkpointing in-memory metrics and chunks in 81.647445ms.  source=persistence.go:572
INFO[6001] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[6001] Done checkpointing in-memory metrics and chunks in 81.368088ms.  source=persistence.go:572
INFO[6301] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[6301] Done checkpointing in-memory metrics and chunks in 118.103705ms.  source=persistence.go:572
INFO[6601] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[6601] Completed maintenance sweep through 161 in-memory fingerprints in 26m50.426083072s.  source=storage.go:1086
INFO[6601] Done checkpointing in-memory metrics and chunks in 104.315766ms.  source=persistence.go:572
INFO[6901] Checkpointing in-memory metrics and chunks...  source=persistence.go:548
INFO[6902] Done checkpointing in-memory metrics and chunks in 71.3209ms.  source=persistence.go:572

juliusv commented Sep 11, 2016

@zxwing Just to be extra sure, did you wipe the storage completely and start fresh after upgrading?

zxwing commented Sep 11, 2016

@juliusv No, I just upgraded and kept using the existing data.

juliusv commented Sep 11, 2016

Could you try with fresh storage? If you don't want to wipe the old data yet, you could temporarily configure Prometheus to use a different storage path.
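Something along these lines, for example (just a sketch; the path is arbitrary):

# Start with an empty, temporary data directory instead of the existing one
mkdir -p /tmp/prometheus-fresh-data
./prometheus -config.file /usr/local/zstack/prometheus/conf.yaml \
    -storage.local.path /tmp/prometheus-fresh-data \
    -web.listen-address 172.20.12.46:9090 -log.level debug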

zxwing commented Sep 11, 2016

@juliusv Sure. Let me test with fresh storage and see if this happens again.

zxwing commented Sep 21, 2016

Closing, as this doesn't happen in the latest version.

zxwing closed this Sep 21, 2016

lock bot commented Mar 24, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 24, 2019
