No space left on device while building 3.26Gb image with 11Gb space available #4423
Comments
Could you please share
@flouthoc here is the output of
The issue is the space on /var/tmp. Are you sure there is a lot of space on /var/tmp?

$ df /var/tmp/
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sdb3       51290592 37341852  11310916  77% /
That does not look like enough space to store the blobs for the 3.26 GB image.
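For readers hitting the same wall: if /var/tmp is the constrained filesystem, the temporary blob directory can be pointed elsewhere via containers.conf. A minimal sketch, assuming a larger disk is available (the path below is an example, not from this thread):

```toml
# ~/.config/containers/containers.conf (rootless) or /etc/containers/containers.conf
[engine]
# Directory where podman stores temporary image blobs during pull/push.
# Defaults to /var/tmp; point it at a filesystem with enough free space.
image_copy_tmp_dir = "/home/bigdisk/tmp"
```

Exporting TMPDIR before the pull should have a similar effect.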
How come 11 GB is not enough for podman to store the layers of a 3.26 GB image?
I don't know, but is the 3.26 GB image compressed? Perhaps it is getting uncompressed, causing the system to run out of space.
@rhatdan I don't know. Can you tell? registry.gitlab.com/gitlab-org/gitlab-development-kit/gitpod-workspace:gitpod-cleanup Either way, adding a layer on top should not result in uncompressing all layers and leaving them on disk, if I understand the Docker model correctly.
It could very well be hitting containers/image#1187. When pulling an image, podman first pulls the layers, then commits them to the storage. Once committed, the downloaded compressed data is removed.
@vrothberg is it possible to add debug-level logging to trace that? I patched Google's My idea is that committing layers as soon as they are ready and cleaning up immediately should save space. Here is the main issue for a bit of context: https://gitlab.com/gitlab-org/gitlab-development-kit/-/merge_requests/2782
You can add the
Committing as soon as possible already happens, but the downloaded compressed data isn't removed as soon as possible; it is removed only after the image has been committed.
@vrothberg if the data is compressed, how can it take 11 GB if the final image is 3.70 GB max?
The compressed data is downloaded and then gets uncompressed for storage in the local store. I do not have another theory that would explain the observation.
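To put rough numbers on that theory: while the pull is in flight, the compressed blobs (under /var/tmp) and the uncompressed layers (in the image store) exist at the same time, so peak usage is approximately their sum. A sketch with assumed sizes (3.26 GB compressed and ~8 GB uncompressed are guesses for this image, not measured values):

```shell
#!/bin/sh
# Rough peak-usage estimate while pulling an image: the compressed blobs
# stay in /var/tmp until the layers are committed, so compressed and
# uncompressed data coexist on disk.
# ASSUMED sizes for the image in this thread; substitute real numbers.
compressed_kb=$((3260 * 1024))     # ~3.26 GB compressed download
uncompressed_kb=$((8000 * 1024))   # ~8 GB unpacked layers (a guess)
peak_kb=$((compressed_kb + uncompressed_kb))
avail_kb=$(df --output=avail /var/tmp | tail -n 1 | tr -d ' ')
echo "available on /var/tmp: ${avail_kb} KB"
echo "estimated peak need:   ${peak_kb} KB"
if [ "$avail_kb" -lt "$peak_kb" ]; then
    echo "a pull of this image would likely fail with ENOSPC"
fi
```

With those assumptions the peak lands around 11.5 GB, which would indeed overflow an 11 GB free partition even though the final image is far smaller.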
I am kind of running into this too:
I have 1.6G for
Oh FYI I am running NixOS and I just "fixed" this by cranking up the size on the tmpfs by adding this to my services section:
@vrothberg why store it uncompressed? Is it possible to add debug info to prove that?
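One low-tech way to prove it without debug logging is to poll disk usage of the suspect directories while a pull runs. A generic sketch; the podman command in the comment is illustrative only:

```shell
#!/bin/sh
# Poll disk usage of a directory once per second while a command runs,
# to see whether /var/tmp or the image store is what fills up.
watch_du() {
    dir=$1; shift
    "$@" &                          # run the command in the background
    pid=$!
    while kill -0 "$pid" 2>/dev/null; do
        du -sk "$dir" 2>/dev/null   # total size in KB
        sleep 1
    done
    wait "$pid"
}

# Illustrative usage (image name is hypothetical):
#   watch_du /var/tmp podman pull registry.example.com/some/image
watch_du /tmp sleep 2
```

Watching both /var/tmp and ~/.local/share/containers/storage this way should show whether the compressed blobs really linger until commit time.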
@vrothberg @mtrmac @nalind WDYT? @rhatdan I have this installed on my rpi4 1gb: When I run this:
podman info:
I tried podman volume prune, etc., but it's not solved yet. How can I solve it, please?
@neuberfran That indicates the total size of the root filesystem is 259.8 MB (and 30 MB free). Fitting an image with 44.8 MB compressed, and 122 MB uncompressed, is just impossible.
A friendly reminder that this issue had no activity for 30 days. |
@rhatdan why did you close the issue? Did
I reopened. |
@mtrmac I have an rpi4 1gb (rev. 1.5) here. I think that to run the command below, you need at least a 4gb or 8gb rpi4. Issue I got:
I take this opportunity to challenge you to get the seadog running on an rpi4. I guarantee you won't be able to, even after watching the videos from the @microhobby channel series on YouTube.
@neuberfran please don’t mix different issues in the same report, even if they have something in common. This one was originally reporting being out of disk space — and still isn’t diagnosed (and that one is amd64, not a RPi). If we mix running out of memory to the same issue, the chance of either one making any progress will only get smaller. |
@mtrmac I've always been referring to the rpi4 (aarch64) which I'm using. I never referred to amd64. I hope I'm not disturbing. Regards
@neuberfran That “out of memory” report would quite likely be welcome separately; just not in this issue. |
I'm still affected by this one, and it can be reproduced quite dramatically by pulling the Silverblue container from Quay using Buildah. Example Containerfile:

FROM quay.io/fedora-ostree-desktops/silverblue:39

This single 1.8 GB image ends up filling my VM's disk before even finishing the pull:
I've cleared out
I think this is a good container for trying to reproduce this issue, as it seems to contain thousands of tiny files, since it's a whole OS filesystem. This is a basic Debian 12 installation in a QEMU VM with Podman installed from the Debian 12 repos, and the filesystem is ext4 with default settings. Buildah is being run rootless as an unprivileged user. Some info:

PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

$ apt list buildah
Listing... Done
buildah/stable,now 1.28.2+ds1-3+b1 amd64 [installed,automatic]

$ buildah version
Version: 1.28.2
Go Version: go1.19.8
Image Spec: 1.1.0-rc2
Runtime Spec: 1.0.2-dev
CNI Spec: 1.0.0
libcni Version:
image Version: 5.23.1
Git Commit:
Built: Thu Jan 1 00:00:00 1970
OS/Arch: linux/amd64
BuildPlatform: linux/amd64

Edit: I don't seem to be affected on Fedora 38 or 39, so it would seem this has actually been fixed upstream, and the issue is down to Debian continuing to ship the older Podman version.
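Since the symptoms differ so much between distributions, it is worth checking which storage driver each setup actually uses before comparing versions. A small sketch, guarded so it degrades gracefully where podman is absent:

```shell
#!/bin/sh
# Print the storage driver in use; "vfs" vs "overlay" matters more for
# this failure mode than the podman version alone. Guarded so the script
# still exits cleanly where podman isn't installed.
if command -v podman >/dev/null 2>&1; then
    podman info --format '{{.Store.GraphDriverName}}'
else
    echo "podman not installed"
fi
```

Debian's packaged defaults may simply select a different driver than Fedora's, which would explain the divergence independently of the version bump.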
Hello, I have exactly the same problem with quay.io/fedora/fedora-silverblue:39, using Buildah with the VFS storage driver. The idea is to build on a GitLab runner. Have you found a solution?
Using the VFS storage driver is simply that costly with images that have that many layers. Arrange to use the overlay driver.
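For reference, the driver is chosen in storage.conf; a minimal sketch of the relevant setting (paths are the usual defaults):

```toml
# /etc/containers/storage.conf (rootless: ~/.config/containers/storage.conf)
[storage]
# "overlay" stores each layer once and stacks them; "vfs" makes a full
# copy of all lower layers for every new layer, which is what blows up
# disk usage with many-layer images.
driver = "overlay"
```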
Well, a workaround, but not a solution. My fix was to move my GitLab Runner from a Debian 12 base to a CentOS Stream 9 base. It seems CentOS ships a new enough version of Podman to avoid this issue, or maybe just a better default Podman config. But based on other comments, it does sound like there might be a solution available by configuring an alternative storage driver?
I understand that VFS is space-hungry; however, I'm using GitLab Runner on Kubernetes, so Buildah runs inside a container, and I don't want to have to mount /var/containers, let alone give it the SYS_ADMIN capability.
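For what it's worth, overlay does not necessarily require CAP_SYS_ADMIN: on kernels with rootless-native overlayfs support (roughly 5.13 and later) it can work unprivileged, and otherwise fuse-overlayfs can back it, which needs /dev/fuse in the container but not SYS_ADMIN. A sketch of the relevant storage.conf options:

```toml
# ~/.config/containers/storage.conf inside the runner container
[storage]
driver = "overlay"

[storage.options.overlay]
# Only needed when the kernel can't do rootless-native overlay;
# requires /dev/fuse in the container, but not CAP_SYS_ADMIN.
mount_program = "/usr/bin/fuse-overlayfs"
```

Whether this works inside a particular Kubernetes runner depends on the pod's security context, so treat it as something to test rather than a guaranteed fix.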
Description
Can't build the image on my local machine. https://gitlab.com/gitlab-org/gitlab-development-kit/-/tree/main/support/gitpod
Steps to reproduce the issue:
git clone https://gitlab.com/gitlab-org/gitlab-development-kit
cd gitlab-development-kit
podman build -f Dockerfile -t gp . --logfile build.log
Describe the results you received:
Describe the results you expected:
I expected podman to report what it is doing and why there is not enough space: what takes that space and how to fix it.

Output of rpm -q buildah or apt list buildah:

Output of buildah version:

Output of podman version if reporting a podman build issue:

Output of cat /etc/*release:

Output of uname -a:

Output of cat /etc/containers/storage.conf: