inodes 100% used #2112
Comments
I see that inodes on /home, not /prometheus, have run out, so this is hardly a Prometheus problem. BTW, if you use XFS instead of ext4 you won't have this problem: XFS allocates inodes dynamically, so as long as there is free space remaining you'll be fine.
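A quick way to confirm the diagnosis is to compare inode usage with block usage; the commands below are a minimal sketch, and the device name /dev/xvdf is only an assumed example:

```
# Inode usage per filesystem; IUse% at 100% means inode exhaustion even if
# `df -h` still shows free blocks.
df -i /home

# Formatting a fresh, empty EBS volume as XFS avoids a fixed inode table
# (device name is an example, not taken from this setup).
mkfs.xfs /dev/xvdf
```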
The configuration is such that the storage directory is on /home, so the diagnosis is correct. Prometheus creates roughly one file for every time series (unique metric/label combination), which can exhaust inodes. Possible mitigations are using XFS instead of ext4, creating the ext4 filesystem with more inodes, or creating a larger filesystem than strictly needed. I don't know offhand whether the first two are easy with Kubernetes volumes.
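For the ext4 route, a rough sketch of creating the filesystem with more inodes than the default (again, the device name is an assumption):

```
# Lower bytes-per-inode (-i) to reserve more inodes than the ext4 default
# (typically one inode per 16384 bytes):
mkfs.ext4 -i 4096 /dev/xvdf

# Or request an explicit inode count with -N:
mkfs.ext4 -N 20000000 /dev/xvdf
```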
This mostly happens if each time series is very short-lived. Because of kubelet internals, metrics about containers from the nodes include restart counters and other details that cause unnecessary time-series churn. You may be able to fix them up with relabelling, or simply not scrape them for now. I also noticed that you have both endpoint and pod targets in the Prometheus configuration.
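If you go the relabelling route, one possible approach is to drop the highest-churn per-container metrics at scrape time with metric_relabel_configs. The job name and metric names below are only illustrative, not taken from this configuration:

```yaml
scrape_configs:
  - job_name: 'kubernetes-nodes'
    # ... kubernetes_sd_configs and relabel_configs as in the existing setup ...
    metric_relabel_configs:
      # Drop high-churn per-container series before they reach local storage.
      - source_labels: [__name__]
        regex: 'container_tasks_state|container_memory_failures_total'
        action: drop
```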
Ok cool, getting a bigger disk fixed it. Looking into what both the endpoint and pod targets get me. Are these going to be the same metrics?
brian-brazil added the kind/question label Oct 26, 2016
Thanks @matthiasr
sekka1 closed this Oct 27, 2016
leedm777 added a commit to leedm777/prometheus that referenced this issue Nov 21, 2017
lock bot commented Mar 24, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
sekka1 commented Oct 21, 2016 (edited)
What did you do?
I am running Prometheus in a Kubernetes cluster with EBS-backed disks. After a day or two, it reports that it is out of disk space. Looking at the disk, inode usage for the data partition is at 100%.
What did you expect to see?
For it not to do this. Is this because of something in my config files or how I'm setting it up?
What did you see instead? Under which circumstances?
Environment
System information:
insert output of uname -srm here
Prometheus version:
insert output of prometheus -version here
Alertmanager version:
insert output of alertmanager -version here (if relevant to the issue)
Prometheus config
Kubernetes pod file
The data is in the /home directory.
Logs:
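The pod file and logs referenced above are not reproduced in this thread. For readers following along, a purely hypothetical sketch of the kind of setup being described (image tag, volume ID, and paths are assumptions, not the reporter's actual configuration) would mount the EBS volume at /home and point Prometheus 1.x at it:

```yaml
# Hypothetical illustration only, not the original pod file.
apiVersion: v1
kind: Pod
metadata:
  name: prometheus
spec:
  containers:
    - name: prometheus
      image: prom/prometheus:v1.2.1          # assumed version
      args:
        - "-storage.local.path=/home/prometheus"   # data directory under /home
      volumeMounts:
        - name: data
          mountPath: /home
  volumes:
    - name: data
      awsElasticBlockStore:
        volumeID: vol-0123456789abcdef0      # assumed EBS volume ID
        fsType: ext4
```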