[btrfs] Sporadic Found incomplete layer error results in broken container engine #16882
Comments
The btrfs backend is not really supported by us; it has several limitations compared to overlay. Does the same issue happen with …?
Thank you for reaching out. The hypothesis that a process writing to the local container storage got killed sounds reasonable; the symptom very much suggests it. Can you try running with the …?
Thank you for the reply. Yes, I can switch to the … I will report back if I find something. If anyone else is able to find a reproducer, that would be great. I keep trying.
I might have found a way of reproducing something that looks very similar by manually deleting the btrfs subvolume of a container image. It's not a 100% match, but likely a good enough match to follow the broken code path. It looks to me like it fails while trying to re-use a nonexistent image …
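Following that observation, one way to inspect the state rather than guess is to cross-check the layer IDs recorded in layers.json against the subvolume directories actually present on disk. A minimal diagnostic sketch, assuming (not confirmed by containers/storage docs) that layers.json is a JSON array of records with an "id" field, and using the paths quoted elsewhere in this thread:

```python
import json
import os

def find_dangling_layers(layers_json_path, subvolumes_dir):
    """Return layer IDs listed in layers.json whose btrfs subvolume
    directory is missing on disk (i.e. a candidate 'incomplete layer')."""
    with open(layers_json_path) as f:
        layers = json.load(f)  # assumed: a JSON array of layer records
    dangling = []
    for layer in layers:
        layer_id = layer.get("id")
        # A record whose subvolume directory vanished is suspicious.
        if layer_id and not os.path.isdir(os.path.join(subvolumes_dir, layer_id)):
            dangling.append(layer_id)
    return dangling

# Example (paths as quoted in this thread; reading them needs root):
# find_dangling_layers(
#     "/var/lib/containers/storage/btrfs-layers/layers.json",
#     "/var/lib/containers/storage/btrfs/subvolumes",
# )
```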
A friendly reminder that this issue had no activity for 30 days. |
Same here (with overlay); didn't manage to reproduce it :/

Output of …:

Output of …:
If it helps, I was able to reproduce this error when I upgraded my hard drive. I unmounted it, took a disk image using GNOME disks, restored it to the new drive, used fdisk/btrfs to resize the new filesystem, and this happened. |
Same here, following.
Same here, and I fixed it by removing the reference to the layer (which doesn't exist) in the /var/lib/containers/storage/btrfs-layers/layers.json file. I don't know if there's a better way to solve it, but at least now I can manage my containers without losing data.
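That manual edit can be scripted. A hedged sketch, assuming (as the comment above suggests) that layers.json is a JSON array of layer objects with an "id" field; it writes a .bak copy first, since the exact schema may differ between containers/storage versions:

```python
import json
import shutil

def drop_layer_record(layers_json_path, bad_layer_id):
    """Remove the record for bad_layer_id from layers.json.
    Writes a .bak backup first; returns the number of records removed."""
    shutil.copy2(layers_json_path, layers_json_path + ".bak")
    with open(layers_json_path) as f:
        layers = json.load(f)
    kept = [rec for rec in layers if rec.get("id") != bad_layer_id]
    with open(layers_json_path, "w") as f:
        json.dump(kept, f)
    return len(layers) - len(kept)
```

Note this only drops the metadata record; child layers that name the removed layer as their parent may still be broken, so nuking the storage directory (as described in the issue) remains the heavier fallback.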
Is there something here that Podman or containers/storage needs to do, or are the workarounds good enough? |
This happens to me a lot, but with ZFS, so the problem might not be in the storage driver but in Podman?
The ZFS storage driver, or the file system being on ZFS?
Yes, I meant the storage driver, and how Podman can get into an inconsistent state with the created ZFS filesystems; sometimes I have to manually destroy them and restart the containers.
Sadly, we have no expertise in the ZFS file system as a storage driver. We would recommend using overlay on top of a ZFS lower layer.
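For anyone wanting to follow that recommendation: the driver is selected in storage.conf. A minimal fragment as a sketch (note that changing the driver generally requires resetting or relocating the existing graph root, since layers from the old driver are not reusable):

```toml
# /etc/containers/storage.conf
[storage]
driver = "overlay"
graphroot = "/var/lib/containers/storage"
runroot = "/run/containers/storage"
```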
I'm running into a similar issue; no commands work, I can't even …
Using the …
This (Sporadic Found incomplete layer error results in broken container engine) also happens with ext4 (using overlay), does a new issue need to be opened? |
There were changes to c/storage in the last year that could have fixed this issue; yours might be a different one. What version of Podman are you using?
Client: Podman Engine |
First time it's happened to me in a long time of running Podman on ZFS. Podman containers failed to run after a podman update this morning with this error: … I had to figure out which container that referred to, and then delete the references to that container in … Podman info (ZFS storage backend on ZFS): …
I got the same issue. Every time I try to delete the files manually, it says permission denied. Can anyone help me reproduce the issue?
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
A sporadically occurring "Found incomplete layer" error after the nightly automatic system updates on openSUSE MicroOS results in a broken podman container engine.

Once the error occurs, nothing works anymore. Even a podman image prune complains about the same error and fails. The only way to fix podman is to manually nuke the /var/lib/containers/storage/btrfs directory.

I'm having this issue on a MicroOS installation with the most recent podman version (4.3.1). I have a couple of containers running there, and this issue has now occurred for the second time in a month after the automatic nightly updates. A fellow redditor confirms the issue.

The issue arises after a round of automatic updates during the night. It is unclear whether the system update or a run of podman auto-update causes the issue; I have not been able to find a reproducer yet.
Steps to reproduce the issue:
A possible reproducer can be found below
Describe the results you received:
Describe the results you expected:
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:

Output of podman info:

Package info (e.g. output of rpm -q podman or apt list podman or brew info podman):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):
btrfs
overlay

A working hypothesis is that podman auto-update gets interrupted by a system reboot, resulting in dangling (corrupted) images. On MicroOS, the start times of the transactional-updates (system updates) and the podman auto-updates are randomized (i.e. systemd units with RandomizedDelaySec in place), so there is a chance that the podman auto-update service gets interrupted by a system reboot. I'm running about 8 containers on the host, so the vulnerable time slot would not be negligible.

This remains a hypothesis at the moment, as I have not yet been able to verify it.
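To illustrate the randomization described above: a systemd timer with RandomizedDelaySec spreads its start time over a window, which is what creates the overlap with the reboot. An illustrative drop-in (unit name is the real podman-auto-update.timer; the values are hypothetical, not copied from MicroOS):

```ini
# /etc/systemd/system/podman-auto-update.timer.d/override.conf (illustrative)
[Timer]
OnCalendar=daily
# Start anywhere in a 15-minute window after OnCalendar fires,
# so a concurrent transactional-update reboot can interrupt the run.
RandomizedDelaySec=900
Persistent=true
```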