crc start issues after forceful shutdown #325
@giuseppe any thoughts on this? We use an RHCOS image
@gbraad that version of podman is really old, and a lot of changes/improvements have gone into it since. The issue you've linked is related to rootless containers, even if the error message looks similar. It looks like the storage is corrupted (in this case, missing symlinks) because of the forced shutdown. I'd suggest removing that image and re-pulling it.
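The remove-and-repull recovery suggested above amounts to something like the following sketch. The image name here is a placeholder, not taken from CRC's actual configuration:

```shell
# Sketch of the suggested recovery: delete the image whose storage
# entries are corrupted, then pull a fresh copy from the registry.
# IMAGE is a placeholder; substitute the actual dnsmasq image ref.
IMAGE="quay.io/example/dnsmasq"

# Guarded so this is a no-op on machines without podman installed.
if command -v podman >/dev/null 2>&1; then
    podman rmi --force "$IMAGE"
    podman pull "$IMAGE"
fi
```

Note that `podman rmi --force` also removes containers using the image, which is acceptable here since the goal is to discard the corrupted local copy entirely.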
@giuseppe this is the version of podman shipped on the RHCOS image OpenShift uses (and it is also the default version of podman in RHEL 8.0).
Alternatively, we could stop the containers BEFORE stopping the VM, but that sounds like a workaround for an issue that can also happen outside of CRC.
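The stop-containers-first idea could be sketched as below; the timeout value is an assumption, not something specified in this thread:

```shell
# Sketch: stop all running containers before shutting down the VM,
# so podman's storage is left in a consistent state.
STOP_TIMEOUT=10   # seconds to wait before SIGKILL (assumed value)

# Guarded so this is a no-op on machines without podman installed.
if command -v podman >/dev/null 2>&1; then
    podman stop --all --time "$STOP_TIMEOUT"
fi
# ...then stop the VM through the hypervisor (hyperkit, libvirt, etc.)
```

As the comment notes, this only helps for clean shutdowns that CRC itself performs; a forced power-off of the VM can still leave storage corrupted.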
Re-pulling the image is not an option, as we need to ensure we can start from a disconnected state (there is no guarantee we can pull an image from a remote registry on the internet, like quay). Alternatively, we could export the image and place the archive inside the VM, so we can always re-import it. But again, this sounds like a workaround for an issue with the podman version delivered with RHCOS(?). @ashcrow Are newer versions of podman considered or available for use with RHCOS?
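The export/re-import approach described above could look roughly like this; the image name and archive path are placeholders, not CRC's actual values:

```shell
# Sketch: save an image to a tar archive baked into the VM disk, so it
# can be restored offline if local storage is corrupted. IMAGE and
# ARCHIVE are placeholders for illustration only.
IMAGE="quay.io/example/dnsmasq"
ARCHIVE="/var/lib/example/dnsmasq.tar"

# Guarded so this is a no-op on machines without podman installed.
if command -v podman >/dev/null 2>&1; then
    # At image-build time: export the image into the VM filesystem.
    podman save -o "$ARCHIVE" "$IMAGE"

    # At recovery time: remove the corrupted copy and re-import the
    # archived one, no network access required.
    podman rmi --force "$IMAGE"
    podman load -i "$ARCHIVE"
fi
```

This keeps the recovery path fully disconnected, at the cost of carrying the archive inside the VM image.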
Soon, it'll be @jnovy
Works for me! Thanks @jnovy. |
Tested a 4.2.0-rc.2 image, and could not reproduce the issue, so it's probably fixed there by the upgrade to a newer podman version. |
Closing this; we can reopen if the issue reoccurs.
I've seen this on hyperkit: when the VM is forcefully shut down (for example, if one interrupts the 3-minute wait for the cluster to be up), the next start fails with
The dnsmasq local image is indeed in an odd state
The podman version on our image is old:
This may or may not be related to containers/podman#3345 (comment)