"no space left on device" error executing build #24
Comments
Hi there! We use Pivotal Tracker to provide visibility into what our team is working on. A story for this issue has been automatically created. The current status is as follows:
This comment, as well as the labels on the issue, will be automatically updated as the status in Tracker changes.
I now see where those mounts are coming from, so using an external volume won't resolve the issue, at least for me, since they're referencing a host mount which was completely full. I did find this article, which looks to be pretty good and works. Perhaps the commands listed there should be run periodically by Concourse to keep things fairly clean?
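The article itself isn't linked in this thread, but the standard Docker cleanup commands are along these lines (a sketch; the prune flags delete data, so run them with care):

```shell
# Remove stopped containers, unused networks, dangling images, and build cache.
docker system prune -f

# More aggressive: also remove all unused (not just dangling) images and
# unused volumes. WARNING: this deletes data nothing currently references.
docker system prune -af --volumes

# Check how much space images, containers, and volumes are using afterwards.
docker system df
```

Running the first command on a schedule (e.g. from cron) is a common way to keep /var/lib/docker from filling up.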
It looks like I may have just needed to give it some time, as new worker instances do clean up a bit. However, several "containers" are still sitting there. I understand keeping the containers used to run tests and do builds, but do the results of building containers need to be kept around? Also, are there any tips for keeping this space manageable besides trying to shrink container sizes (build/test)?
There are no known container/volume leaks; if you're running out of space you probably just need more, if possible (e.g. scaling up disk size in your IaaS). There will always be one container per resource in your pipelines, across workers, but those shouldn't consume much disk. Containers for failed builds of jobs will also stick around until the next build finishes. If you have large artifacts making their way through your pipelines, I could see that adding up, but we at least make efficient use of local space by making copy-on-write replicas of these assets rather than copying them around wholesale.
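For readers unfamiliar with the copy-on-write idea mentioned above: on filesystems that support reflinks (e.g. btrfs, XFS), a "copy" initially shares all of its data blocks with the original, so replicating a large artifact costs almost no extra disk until one side is modified. A minimal illustration with GNU coreutils, which is not Concourse's actual mechanism but shows the same idea (`--reflink=auto` falls back to a plain copy on filesystems without CoW support):

```shell
# Create a ~10MB file to act as a stand-in "build artifact".
dd if=/dev/zero of=artifact.bin bs=1M count=10 2>/dev/null

# Make a copy-on-write replica where the filesystem supports it;
# on ext4/tmpfs this silently falls back to a normal copy.
cp --reflink=auto artifact.bin replica.bin

# Either way, the two files have identical contents.
cmp artifact.bin replica.bin && echo "replicas match"
```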
I think you hit the nail on the head regarding the large artefacts; I'll have to see whether that's the case. Perhaps a command could be added to fly or the UI to clean up failed builds? If the issue is large builds, it would be good to have a way to clean them up when I don't need them sticking around for debugging.
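Until such a command exists, the fly CLI does at least let you see what is occupying the workers. These are existing fly subcommands; "my-target" below is a placeholder for your configured target name:

```shell
# List containers across all workers (one per pipeline resource,
# plus containers for recent builds).
fly -t my-target containers

# List volumes and which worker each one lives on.
fly -t my-target volumes

# List workers along with their container counts.
fly -t my-target workers
```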
@sybrandy Agreed - would you mind opening a feature request for that? I'm gonna close this out since that sounds like the likely root cause.
Done: #26. Thanks.
Hello,
Wasn't sure what the correct spot for this was, so I'm trying here.
I'm trying to run some tests using a new Java container, and the image pull can't finish because it reports a "no space left on device" error while writing to /var/lib/docker/tmp/.
Running df -h inside the worker containers, I can see that the / and /etc/hosts mounts are completely full, with 71G allocated to them. Should Concourse be cleaning up these directories? The latter, /etc/hosts, is odd because, looking in the container, it appears to be a small file. The former, /, is mostly taken up by the /worker-state directory, under which there are a number of "live" volumes, two of them 22G and 30G in size.
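To see which live volumes are responsible, something like the following can be run inside the worker container (a sketch; the /worker-state path matches what is described above, but the exact volumes subdirectory layout is an assumption and may differ between Concourse versions):

```shell
# Overall disk usage of the worker's mounts.
df -h

# Largest entries under the worker state directory, biggest first.
du -sh /worker-state/* 2>/dev/null | sort -rh | head -n 10

# Drill into the live volumes specifically (path is an assumption).
du -sh /worker-state/volumes/live/* 2>/dev/null | sort -rh | head -n 10
```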
If Concourse is not supposed to clean these up, then why is this filling up, and how do I manage the space without having to restart the worker containers periodically? Yes, I could mount a volume, but that only delays the inevitable.