
Emergency mode when trying to boot into GTS post rebase #832

Closed

Casserole97 opened this issue Jan 25, 2024 · 10 comments

@Casserole97

Describe the bug

I rebased to bluefin-dx GTS from bluefin-dx latest following the instructions at https://universal-blue.org/images/
After trying to boot into the new image, it enters emergency mode with the following error:

Cannot open access to console, the root account is locked.
See sulogin(8) for more details.
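For context, the rebase itself followed the documented two-step procedure from that page, and judging by the status output below, the failure happens on the first reboot, i.e. into the unverified deployment, before the signed rebase step. Roughly (commands per the Universal Blue docs, not a verbatim transcript of my terminal):

    # step 1: rebase to the unverified image and reboot
    rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/bluefin-dx:gts
    systemctl reboot   # <- this boot drops into emergency mode

    # step 2 (never reached): switch to the signed image and reboot again
    rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/bluefin-dx:gts
    systemctl reboot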

What did you expect to happen?

I expected to reboot into the new image following the rebase.

Output of rpm-ostree status

State: idle
AutomaticUpdates: stage; rpm-ostreed-automatic.timer: no runs since boot
Deployments:
  ostree-unverified-registry:ghcr.io/ublue-os/bluefin-dx:gts
                   Digest: sha256:703e5c8cd56f08d5cfb3c0209833d8086eff8a174ef887056eeda9857ec2eea6
                  Version: 38.20240124.0 (2024-01-24T16:52:45Z)
                     Diff: 1 upgraded, 1786 downgraded, 135 removed, 67 added

● ostree-image-signed:docker://ghcr.io/ublue-os/bluefin-dx:latest
                   Digest: sha256:bf8e19513b72e0c90b8b582db775497cfc7ad469c9f555b1bbd8c1df0a19a095
                  Version: 39.20240124.0 (2024-01-24T16:52:28Z)

  ostree-image-signed:docker://ghcr.io/ublue-os/bluefin-dx:latest
                   Digest: sha256:bf8e19513b72e0c90b8b582db775497cfc7ad469c9f555b1bbd8c1df0a19a095
                  Version: 39.20240124.0 (2024-01-24T16:52:28Z)
          LayeredPackages: iosevka-aile-fonts iosevka-fonts
                   Pinned: yes

Extra information or context

Here is how it looks:
[screenshot: emergency mode console showing the sulogin error above]

@bsherman
Contributor

Since you have output from rpm-ostree status... does that mean you were able to reboot into the previous deployment?

If so, can you roll back that rebase to gts?

I've not personally tested this scenario, but what you did here is downgrade your machine from Fedora Bluefin 39 to 38. I don't know why that specifically would be a problem, but if I had to guess, I'd say the issue is related to doing that downgrade.
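(For anyone else hitting this: assuming the previous deployment still boots from the GRUB menu, the rollback would be something like the following sketch; rpm-ostree rollback just points the bootloader back at the other deployment.)

    # boot the previous, working deployment from the boot menu, then:
    rpm-ostree status     # confirm which deployment is currently booted
    rpm-ostree rollback   # make the pre-rebase deployment the default again
    systemctl reboot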

@castrojo
Member

I routinely rebase machines back and forth to test things, so I think this might depend on other factors. In a pure bootc world this shouldn't be an issue, but that doesn't solve the problem they're having now. Not sure what's going on here, but maybe someone else can help.

@Casserole97
Author

@bsherman Yes, I managed to boot into the previous deployment via the boot menu and then rolled back, so the machine is still usable. The only problem is that I can't try out Bluefin GTS 😢

@castrojo I assumed the same as @bsherman, that the issue is related to downgrading the Fedora version, but if you're able to switch between versions then I guess it's something else. Unfortunately I can't think of what could cause this; I haven't tinkered much with my installation, so it should be close to defaults.

@Mystrain308

I had the same error: I did a fresh install using the main image, then rebased to bluefin latest, then tried rebasing to gts, and boom, emergency mode. I solved it by doing a fresh install using the Bluefin 38 image and then rebasing to GTS. If you want to try GTS, this is a way to do it.

@Casserole97
Author

After what @Mystrain308 said, I tried these steps in a VM and got the same error (a rough command-level sketch follows the list):

  1. Installed the latest Silverblue release (39-1.5)
  2. Updated with rpm-ostree update and rebooted into the new deployment
  3. Removed all mutations with rpm-ostree reset and rebooted into the new deployment
  4. Rebased to bluefin-dx:gts and rebooted into the new deployment
  5. Was met with the same error
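Roughly, as commands (the rebase target is the standard unverified one from the Universal Blue docs; this is a sketch of the steps above, not a terminal transcript):

    rpm-ostree update
    systemctl reboot

    rpm-ostree reset
    systemctl reboot

    # downgrade rebase: Silverblue 39 -> bluefin-dx:gts (Fedora 38 based)
    rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/bluefin-dx:gts
    systemctl reboot   # this boot drops into emergency mode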

[Screenshot from 2024-01-26 11-16-09: the same emergency mode error in the VM]

So it seems it may be related to downgrading Fedora versions? Unfortunately, I don't know how to check what's causing this problem. 🤷

@bsherman
Contributor

@Casserole97 thank you for reproducing it so clearly. I'm curious about this too.

Since some people have successfully downgraded, I'm curious if it matters which version is initially installed.

I hope to experiment a bit with this and see if we can track this down.

@bsherman
Contributor

bsherman commented Jan 26, 2024

Ok, I tested this.

I've confirmed that a fresh install of Silverblue 39, followed by a downgrade rebase to Silverblue 38, results in this same error. It has nothing to do with Bluefin specifically, nor with ostree native containers.

I'll next test whether an install of 38, upgraded to 39, can then downgrade again. My suspicion is that it will work (since it seems to work for some people).
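For reference, the downgrade test used the stock Fedora refs, roughly as follows (ref names assumed from the default fedora remote; treat this as a sketch):

    # on a fresh Silverblue 39 install:
    rpm-ostree rebase fedora:fedora/38/x86_64/silverblue
    systemctl reboot   # this boot lands in the same emergency mode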

@castrojo
Member

Do you think it depends on the state of 38/39 at the time of downgrading?

@bsherman
Contributor

Do you think it depends on the state of 38/39 at the time of downgrading?

Testing answers this: if I first install Silverblue 38, then upgrade to Silverblue 39... even if I clear out old deployments and reboot so that there are no F38 deployments left... I can still rebase back to Silverblue 38 and booting succeeds.

But if I start with a Silverblue 39 install, I cannot downgrade.

I haven't found the root cause, but if I had to guess, there's something in /var or /etc which was dropped in F39 but is still needed by F38. So any machine first installed with F38 has it, but F39 machines don't.

Bottom line, this is an upstream issue with Silverblue... but I don't really think downgrading is officially supported anyway. I just wanted to understand this as well as we could.
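If someone wants to chase the /etc half of that guess, one possible starting point (only a sketch, and it only covers /etc, not /var; the output file names are placeholders) is to compare how each machine's /etc differs from its deployment defaults:

    # on each test machine: list /etc files that differ from the deployment defaults
    sudo ostree admin config-diff | sort > /tmp/etc-diff.txt

    # then diff the two lists (one from a 38-first install, one from a 39-first install)
    diff /tmp/etc-diff-38first.txt /tmp/etc-diff-39first.txt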

@castrojo
Member

Yeah if this is the case then I'll update the docs to be more upfront about this!

dosubot closed this as not planned (stale) on Jul 2, 2024.