update images #311
Conversation
Cirrus CI build successful. Found built image names and IDs:
Force-pushed from 5e51325 to d33ecc1
Requires emergency override of containers.conf SNAFU with zstd:chunked (containers/common#1730)

Signed-off-by: Ed Santiago <santiago@redhat.com>
@cevich if you have a spare moment could you look at the fedora-aws Base Image failure please?
I can't find the string "Waiting for the instance to stop" anywhere in the likely source trees, so I have no idea what is running or what the bug is. FWIW, the python-3.12 bug is a red herring: my last build threw the same error but worked anyway.
The podman-py stuff I believe Urvashi sorted out; there's an actual bug in pylint and she found a workaround. The error you got is coming from Packer. I've seen similar things before, and it looks like a flake to me. It probably orphaned a VM (we can worry about that later). I restarted the task and will keep an eye on it as I'm able today...
...uggg. Amazon is having a bad day, re-running again...
It doesn't seem to be a flake. I restarted it four times yesterday.
I don't think we've changed the packer version recently, so it must be something on the Amazon side, perhaps triggering a bug in packer. In my last attempt, I found the line: I looked that instance up on the AWS EC2 console, and it shows the status as "terminated", which is correct. I also found a few other instances in a "stopped" state; that shouldn't happen. If you'd like to try figuring out and bumping up the packer timeout, that may get you past the hump. Otherwise we may need a newer version (which may not accept our current …)
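For hunting down those orphaned "stopped" instances without clicking through the EC2 console, a query along these lines should work (a sketch: the region and the output fields are assumptions, not something from this thread):

```sh
# List instances sitting in the "stopped" state, with launch time and Name tag.
# --region is a placeholder; use whatever region the CI builds run in.
aws ec2 describe-instances \
    --region us-east-1 \
    --filters "Name=instance-state-name,Values=stopped" \
    --query 'Reservations[].Instances[].{ID:InstanceId,Launched:LaunchTime,Name:Tags[?Key==`Name`]|[0].Value}' \
    --output table
```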
Looking again:
I bet amazon changed some timings on their end, such that (for example) it tries an ACPI shutdown, waits, tries again, waits, then "yanks the plug". If the timings of any of that collide with what packer is expecting, we'd get this problem. It's highly likely there's a timeout setting for this; it probably needs to be added to the …
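If the setting in question is Packer's AWS waiter, it can usually be stretched via environment variables rather than template edits. A sketch, assuming the build uses the standard packer-plugin-amazon polling overrides (the values and the template name are illustrative, not from this repo):

```sh
# Stretch Packer's AWS waiters (covers steps like "Waiting for the instance to stop"):
# poll every 30s, up to 120 attempts, i.e. roughly an hour. Values are guesses.
export AWS_POLL_DELAY_SECONDS=30
export AWS_MAX_ATTEMPTS=120
packer build fedora-aws.pkr.hcl   # template name is a placeholder
```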
If we need to dig deeper, there are options here as well. AWS keeps a log of basically every API request per user, so it's pretty easy to see if and when the request came in. In this case, it does look like a …
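The per-request log being described sounds like CloudTrail; a minimal sketch of pulling the recent stop calls out of it (the region and output shape are assumptions):

```sh
# Show recent StopInstances API calls recorded by CloudTrail.
aws cloudtrail lookup-events \
    --region us-east-1 \
    --lookup-attributes AttributeKey=EventName,AttributeValue=StopInstances \
    --max-results 10 \
    --query 'Events[].{Time:EventTime,User:Username,ID:EventId}' \
    --output table
```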
Closing in favor of #312. Hoping all these timeouts and errors go away.