Multiple K8s SD instances causing scrape issues. #2020
Comments
Flame graph
sum(rate(prometheus_target_scrape_pool_sync_total[1m])) shows > 1.5 continuously.
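A per-pool breakdown of the same counter can show which of the SD instances is churning. A hedged sketch, assuming the counter carries a scrape_job label (as in current Prometheus releases; label names may have differed in 1.x):

```promql
# Overall sync rate across all scrape pools (the query above):
sum(rate(prometheus_target_scrape_pool_sync_total[1m]))

# Per-pool breakdown, to isolate a misbehaving scrape config:
sum by (scrape_job) (rate(prometheus_target_scrape_pool_sync_total[1m]))
```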
The problems may be unrelated to the kube SD; I think the memory chunks settings have been misconfigured.
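In Prometheus 1.x the local-storage chunk limits were command-line flags rather than config-file settings. A sketch of the flags in question; the values here are illustrative, not recommendations, and defaults should be checked against the 1.x docs:

```sh
# Prometheus 1.x local storage flags (removed in 2.x).
# memory-chunks caps chunks held in memory; max-chunks-to-persist
# caps chunks waiting to be written to disk.
prometheus \
  -storage.local.memory-chunks=1048576 \
  -storage.local.max-chunks-to-persist=524288
```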
This had nothing to do with the service discovery, and was down to a misconfiguration.
tcolgate closed this Sep 23, 2016
shamil commented Oct 13, 2016

@tcolgate I'm hitting this too...
shamil commented Oct 13, 2016
lock bot commented Mar 24, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
lock bot locked and limited conversation to collaborators Mar 24, 2019
tcolgate commented Sep 22, 2016

What did you do?
Configured multiple instances of kube SD (9 in total, all small clusters)
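A setup like this would typically carry one kubernetes_sd_configs block per cluster in prometheus.yml. A minimal sketch in the current syntax (the 1.x syntax of the era differed, e.g. it used an api_servers list); job names, addresses, and file paths here are hypothetical:

```yaml
scrape_configs:
  # One scrape job per cluster; repeated for each of the ~9 clusters
  # with that cluster's (hypothetical) API server address and CA.
  - job_name: 'cluster-1-nodes'
    kubernetes_sd_configs:
      - api_server: 'https://cluster-1.example.com'    # hypothetical address
        role: node
        tls_config:
          ca_file: /etc/prometheus/cluster-1-ca.crt    # hypothetical path
  - job_name: 'cluster-2-nodes'
    kubernetes_sd_configs:
      - api_server: 'https://cluster-2.example.com'
        role: node
```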
What did you expect to see?
Normal performance.

What did you see instead? Under which circumstances?
Target scrapes mostly stay in UNKNOWN or flit in and out.
1.1.3 from Docker Hub latest
may provide offline
Probably related, we are seeing
We also see this on another instance which is not manifesting problems.