Support single-uid/gid rootless mode #1651

Closed
llchan opened this issue Oct 15, 2018 · 12 comments
Labels: rootless, locked - please file new issue/PR

Comments

@llchan (Contributor) commented Oct 15, 2018

/kind feature

Description

In a multi-rootless-tenant setup where users are authenticated via NIS, it's not practical to deploy /etc/subuid or /etc/subgid. In those cases, we must run in single-uid/gid mode, at least until there is support for some sort of dynamic subuid assignment.

Off the top of my head, we would need a few changes to support single-uid/gid mode:

  • Don't lchown anything in the downloaded images/layers/etc. Just leave it all owned by the host user. Maybe mount the rootfs read-only if we detect the presence of something owned by a different uid?
  • Require everything in the container to run as container root (nothing really needs to be done for this, it will fail pretty obviously if it tries to use a different user). Technically the container uid does not need to be root (can be the host uid), so we may want to avoid that assumption if possible.
  • Gate this behind a flag (currently there's an env var, but a cli flag would be nice), or maybe enable single-uid mode if the user is not defined in /etc/subuid, but warn the user if they didn't explicitly opt in.

I'm not an expert in the podman codebase, so opening this up as more of a discussion to see what complications may arise.
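For concreteness, here is a minimal sketch (not podman's actual code) of what a single-uid/gid mapping looks like in OCI runtime-spec terms: the host user's uid and gid are the only ids mapped into the user namespace, either as container root or as the host uid itself.

```go
package main

import (
	"fmt"
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

// singleIDMappings builds the one-entry uid/gid mappings a single-uid
// rootless container would use: the invoking user's host uid/gid is the
// only id available inside the user namespace.
func singleIDMappings(containerID uint32) ([]specs.LinuxIDMapping, []specs.LinuxIDMapping) {
	uidMap := []specs.LinuxIDMapping{{
		ContainerID: containerID,         // e.g. 0 to appear as root in the container
		HostID:      uint32(os.Getuid()), // the unprivileged host user
		Size:        1,                   // exactly one id, no /etc/subuid needed
	}}
	gidMap := []specs.LinuxIDMapping{{
		ContainerID: containerID,
		HostID:      uint32(os.Getgid()),
		Size:        1,
	}}
	return uidMap, gidMap
}

func main() {
	uids, gids := singleIDMappings(0)
	fmt.Printf("uid_map: %+v\ngid_map: %+v\n", uids, gids)
}
```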

@rhatdan (Member) commented Dec 22, 2018

@giuseppe WDYT on this one?

@giuseppe (Member)
We support it internally for testing, but I don't think it should be enabled for all users, as most images will stop working. On the other hand, I don't like imposing arbitrary limitations if users want this, so maybe it can be a setting in ~/.config/containers/libpod.conf rather than something exposed in the CLI.

@AkihiroSuda (Collaborator)
> Don't lchown anything in the downloaded images/layers/etc. Just leave it all owned by the host user.

I suggest setting the user.rootlesscontainers xattr instead of lchown-ing.
Our fork of PRoot can emulate file ownership using that xattr (with significant ptrace overhead): https://github.com/rootless-containers/proto
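As a rough sketch of that idea: instead of lchown-ing an extracted entry, the intended owner is recorded in the user.rootlesscontainers xattr so an emulator such as the PRoot fork above can present it later. The real value is a protobuf message defined in the rootless-containers proto repository; the string encoding below is a simplification for illustration only.

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// recordOwner stores the uid/gid a file *should* have in the
// user.rootlesscontainers xattr instead of calling lchown, which an
// unprivileged single-uid user cannot do. NOTE: the real xattr value is a
// protobuf-encoded message (see rootless-containers/proto); a plain string
// is used here only to keep the sketch self-contained.
func recordOwner(path string, uid, gid int) error {
	value := []byte(fmt.Sprintf("uid=%d,gid=%d", uid, gid)) // simplified encoding
	// Lsetxattr does not follow symlinks, mirroring what lchown would do.
	return unix.Lsetxattr(path, "user.rootlesscontainers", value, 0)
}

func main() {
	if err := recordOwner("./layer/etc/shadow", 0, 0); err != nil {
		fmt.Println("setxattr failed:", err)
	}
}
```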

@rhatdan (Member) commented Mar 8, 2019

@AkihiroSuda @giuseppe What is going on with this issue?

@jamescassell (Contributor)
This would be valuable in an Active Directory environment.

@rhatdan (Member) commented Mar 8, 2019

We need to get /etc/subuid and /etc/subgid into nsswitch so they can be managed in LDAP, Active Directory, or IPA.
Tons of images will not work without more than one UID.

@llchan (Contributor, Author) commented Mar 8, 2019

I don't disagree that tons of images won't work with a single uid, but there are also simple use cases where we'd like to use podman as a glorified chroot running a single process as the existing user. In those cases, everything inside the image should be owned by that user, and arguably the container user should also remain that user (rather than becoming root in the container).

@jamescassell (Contributor)
How hard would it be to automatically squash all the uids in the image down to a single one?

@llchan (Contributor, Author) commented Mar 9, 2019

I don't think there needs to be any uid squashing; it just needs to skip the lchown after the image layers are downloaded.

@rhatdan (Member) commented Mar 9, 2019

The files are lchowned as the image is pulled, not afterwards; the UIDs are set while the image is being installed. I have no problem with allowing any UID range, and with throwing errors when UIDs outside of that range are used. Currently podman requires 65000 or so UIDs, which I think is wrong. We should just go with whatever is defined in /etc/subuid and /etc/subgid and set up the user namespace with that range plus the user's own UID. If there is nothing in /etc/subuid and /etc/subgid, we should just warn and continue.
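A minimal sketch of that proposed behavior, using hypothetical helper names rather than podman's own functions: look the user up in /etc/subuid, and if no entry is found, warn and fall back to a single-uid mapping instead of failing.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"os/user"
	"strconv"
	"strings"
)

// subIDRange returns the first start/count range listed for name in the
// given subordinate-id file (/etc/subuid or /etc/subgid), or ok=false if
// the user has no entry.
func subIDRange(path, name string) (start, count int, ok bool) {
	f, err := os.Open(path)
	if err != nil {
		return 0, 0, false
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Split(scanner.Text(), ":") // format: name:start:count
		if len(fields) != 3 || fields[0] != name {
			continue
		}
		s, err1 := strconv.Atoi(fields[1])
		c, err2 := strconv.Atoi(fields[2])
		if err1 == nil && err2 == nil {
			return s, c, true
		}
	}
	return 0, 0, false
}

func main() {
	u, err := user.Current()
	if err != nil {
		panic(err)
	}
	if start, count, ok := subIDRange("/etc/subuid", u.Username); ok {
		fmt.Printf("using subuid range %d:%d plus host uid %s\n", start, count, u.Uid)
	} else {
		// Proposed behavior: warn and continue with a single-uid mapping.
		fmt.Fprintf(os.Stderr, "warning: no /etc/subuid entry for %s; falling back to a single-uid user namespace\n", u.Username)
	}
}
```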

@giuseppe (Member) commented Mar 9, 2019

I will work on this feature, since so many users are asking for it. We should improve the errors in containers/storage to detect when an ID is not available and error out if the image requires it.

giuseppe added a commit to giuseppe/libpod that referenced this issue Mar 11, 2019
We were playing it safe and did not allow any container to have fewer than 65536 mappings. There are a couple of reasons to change that:

- it blocked libpod from working in environments where newuidmap/newgidmap are not available or not configured.

- it did not allow using different partitions of subuids, where each user has fewer than 65536 ids available.

Hopefully this change in containers/storage:

containers/storage#303

will make the errors clearer if there are not enough IDs for the image being used.

Closes: containers#1651

Signed-off-by: Giuseppe Scrivano <gscrivan@redhat.com>
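To make the relaxed check concrete, a hypothetical validator along these lines (not the actual PR diff) would reject only an empty mapping and merely warn when fewer than 65536 ids are mapped, leaving it to containers/storage to surface a clearer error if the image really needs unmapped ids.

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// IDMapping mirrors the shape of a uid/gid mapping entry for this sketch.
type IDMapping struct {
	ContainerID int
	HostID      int
	Size        int
}

// checkIDMappings is a hypothetical validator illustrating the relaxed rule:
// an empty mapping is still an error, but fewer than 65536 ids is merely a
// warning, since the image may not need the missing ids at all.
func checkIDMappings(mappings []IDMapping) error {
	if len(mappings) == 0 {
		return errors.New("no user namespace mappings specified")
	}
	total := 0
	for _, m := range mappings {
		total += m.Size
	}
	if total < 65536 {
		log.Printf("warning: only %d ids mapped; images that use unmapped ids will fail at pull/extract time", total)
	}
	return nil
}

func main() {
	// A single-uid mapping now passes validation with just a warning.
	err := checkIDMappings([]IDMapping{{ContainerID: 0, HostID: 1000, Size: 1}})
	fmt.Println("validation error:", err)
}
```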
@giuseppe (Member)

PR here: #2604

muayyad-alsadi pushed a commit to muayyad-alsadi/libpod that referenced this issue Apr 21, 2019
github-actions bot added the locked - please file new issue/PR label Sep 24, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 24, 2023