
Proposal: gracefully fallback to other drivers when loading the provided one fails #1457

Closed
dcermak opened this issue Jan 2, 2023 · 16 comments · Fixed by #1460

Comments

@dcermak (Contributor) commented Jan 2, 2023

c/storage will pick a new driver by name in

```go
func New(name string, config Options) (Driver, error) {
```
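For illustration, here is a much-simplified sketch of name-based driver lookup that errors out rather than falling back. This is not the actual c/storage code (the real `New` also takes an `Options` config and probes the backing filesystem); the driver types and registry below are made up for the example:

```go
package main

import (
	"fmt"
	"os"
)

// Driver is a minimal stand-in for a storage driver interface.
type Driver interface {
	Name() string
}

type overlayDriver struct{}

func (overlayDriver) Name() string { return "overlay" }

type vfsDriver struct{}

func (vfsDriver) Name() string { return "vfs" }

// drivers maps driver names to constructors, mirroring the idea of
// picking a driver by name.
var drivers = map[string]func() (Driver, error){
	"overlay": func() (Driver, error) { return overlayDriver{}, nil },
	"vfs":     func() (Driver, error) { return vfsDriver{}, nil },
}

// New returns the driver registered under name, or an error if the
// name is unknown -- no silent fallback to another driver happens here.
func New(name string) (Driver, error) {
	construct, ok := drivers[name]
	if !ok {
		return nil, fmt.Errorf("driver %q not supported", name)
	}
	return construct()
}

func main() {
	if d, err := New("overlay"); err == nil {
		fmt.Println("picked:", d.Name())
	}
	if _, err := New("zfs"); err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
	}
}
```

The question in this issue is what should happen on the error path: fail hard, or retry with a different driver.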

In certain cases, a requested driver may not be a valid choice: e.g. the system-wide configuration dictates the zfs driver, but $HOME is on a filesystem other than zfs.

In principle it would be simple to handle such a case and gracefully fall back to a supported driver, but that could be quite unexpected to API consumers.

Would this be something that could be added? If yes, where do you think would be the best place for such a change?

@vrothberg (Member):

> but that could be quite unexpected to API consumers.

Yes, I agree. If the user explicitly requested a specific driver, I think that we must error out.

If the user doesn't request a specific driver, c/storage already picks a working and reasonable default.

@dcermak (Contributor Author) commented Jan 3, 2023

So if podman would want to gracefully fall back or pick a new driver, it would be up to podman to implement such behavior then?

@vrothberg (Member):

> So if podman would want to gracefully fall back or pick a new driver, it would be up to podman to implement such behavior then?

No, Podman should error out as well. Maybe I am misreading the proposal but if the user does a podman --storage-driver=foobar and podman would (silently) pick btrfs because foobar doesn't work, it would be a kind of magic that may be OK for some users but would be a break for others.

@dcermak (Contributor Author) commented Jan 3, 2023

No, this is more for the case when e.g. the system-wide driver is set to btrfs/zfs via /etc/containers/storage.conf, but your home partition is on xfs. It would be nice if podman would then just display a warning and gracefully fall back to the best working driver for rootless containers (which would be overlay, I guess?).

If you explicitly ask for a driver, then repicking shouldn't happen imho.

@vrothberg (Member):

> If you explicitly ask for a driver, then repicking shouldn't happen imho.

If /etc/containers/storage.conf has a specific driver set, then the admin or the user has changed it, which to me is the same as --storage-driver on the CLI.

@giuseppe WDYT?

@dcermak (Contributor Author) commented Jan 3, 2023

> > If you explicitly ask for a driver, then repicking shouldn't happen imho.

> If /etc/containers/storage.conf has a specific driver set, then the admin or the user has changed it, which to me is the same as --storage-driver on the CLI.

/etc/containers/storage.conf could also have a specific driver set by the distribution vendor, which is the case that I am trying to address here.

@giuseppe (Member) commented Jan 3, 2023

> If /etc/containers/storage.conf has a specific driver set, then the admin or the user has changed it, which to me is the same as --storage-driver on the CLI.

> @giuseppe WDYT?

As a user, I also expect that. If the storage driver is specified in /etc/containers/storage.conf, then I expect that it is either honored or an error is raised.

If anything, it should be a different setting IMO, something like preferred_storage_driver, even better if it is a list of drivers, that are used when storage-driver is unset.
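As a purely hypothetical illustration of the setting sketched above (it did not exist at the time of this discussion; the option name and values below are only the proposal, not an implemented c/storage setting), such an option in /etc/containers/storage.conf (which is TOML) might look like:

```toml
[storage]
# Hypothetical option: ordered list of drivers to try when the
# "driver" key itself is left unset. Driver names are examples only.
preferred_storage_driver = ["btrfs", "overlay", "vfs"]
```

A list rather than a single name lets a distribution express "use btrfs where possible, otherwise overlay" in one place.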

Side question: any reason to default to zfs?

@dcermak (Contributor Author) commented Jan 3, 2023

> If anything, it should be a different setting IMO, something like preferred_storage_driver, even better if it is a list of drivers, that are used when storage-driver is unset.

That sounds like a viable option to me.

> Side question: any reason to default to zfs?

That was just a random example. In my case the driver was actually btrfs, which is used as the default storage driver on openSUSE/SLE when / is btrfs. But that can cause the aforementioned issue when the user puts their $HOME on anything that is not btrfs.

A preferred storage driver or a priority list sounds like a good solution for this problem. Would c/storage be the correct place to add this?

@vrothberg (Member):

Isn't c/storage picking the preferred/right driver automatically? In that case, distributions don't need to hard-code it in their configs.

@dcermak (Contributor Author) commented Jan 3, 2023

> Isn't c/storage picking the preferred/right driver automatically? In that case, distributions don't need to hard-code it in their configs.

No, it picks the driver according to its own priority list (if the driver is not set):

```go
priority = []string{
```

However, there is no way for a distro to change that priority list.
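A configurable priority list could be consumed by a fallback loop along these lines. This is a hypothetical sketch, not c/storage's implementation: the driver names, the `supported` map, and the probing logic are all made up for the example; real code would probe the backing filesystem instead:

```go
package main

import (
	"errors"
	"fmt"
)

// supported simulates which drivers can actually be initialized on
// this system; a real implementation would probe the filesystem.
var supported = map[string]bool{
	"overlay": true,
	"vfs":     true,
}

// tryDriver reports whether the named driver can be used here.
func tryDriver(name string) error {
	if !supported[name] {
		return fmt.Errorf("driver %q cannot be used here", name)
	}
	return nil
}

// pickDriver walks a priority list and returns the first driver that
// initializes, mirroring the fallback behavior under discussion.
func pickDriver(priority []string) (string, error) {
	for _, name := range priority {
		if err := tryDriver(name); err != nil {
			fmt.Println("skipping:", err)
			continue
		}
		return name, nil
	}
	return "", errors.New("no usable storage driver found")
}

func main() {
	// A distro-configurable priority list (hypothetical values).
	selected, err := pickDriver([]string{"btrfs", "overlay", "vfs"})
	if err != nil {
		panic(err)
	}
	fmt.Println("selected:", selected)
}
```

Making the `priority` slice itself configurable, rather than hard-coding it, is exactly the change being proposed here.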

@vrothberg (Member):

Making that list configurable SGTM. The changes would be minimal and there's no hidden magic.

👍

@rhatdan (Member) commented Jan 3, 2023

I would prefer not to rely on the priority list. Having a fallback driver in storage.conf would be preferable.

@vrothberg (Member):

> I would prefer not to rely on the priority list.

Can you elaborate?

> Having a fallback driver in storage.conf would be preferable.

What if that doesn't suffice and users want a third option? To me, making the priority list configurable covers all there is to cover.

@rhatdan (Member) commented Jan 3, 2023

I guess doing that would be fine, but I think only a really small number of users would need a third option.

I did not like the priority list and prefer users to always specify "overlay" or something else.

dcermak added a commit to dcermak/storage that referenced this issue Jan 3, 2023
This fixes containers#1457

Signed-off-by: Dan Čermák <dcermak@suse.com>
@dcermak (Contributor Author) commented Jan 3, 2023

To be honest, I find the configurable priority list a more elegant solution than a default plus a fallback. A default and a fallback would result in more or less the same code as a priority list, while imho being less flexible. Also, I personally don't see the big advantage of restricting the user to a default and a fallback when they could get a lot more fallbacks out of the box.

Also, this is a pretty niche/specific setting, so giving a distribution vendor/poweruser more flexibility is imho fine, as they should know what they are doing.

@rhatdan (Member) commented Jan 3, 2023

Anyway, please open a PR to make the change.

dcermak added a commit to dcermak/storage that referenced this issue Jan 4, 2023
This fixes containers#1457

Co-authored-by: Valentin Rothberg <vrothberg@redhat.com>
Signed-off-by: Dan Čermák <dcermak@suse.com>
dcermak added a commit to dcermak/storage that referenced this issue Jan 5, 2023
This fixes containers#1457

Co-authored-by: Valentin Rothberg <vrothberg@redhat.com>
Signed-off-by: Dan Čermák <dcermak@suse.com>