rootless: Enable rootless containers by default with user UIDs/GIDs to 65535 in /etc/subxid #3441

Closed
jwflory opened this issue Jun 26, 2019 · 13 comments

@jwflory (Contributor) commented Jun 26, 2019

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind feature

Description

It is difficult to enable rootless containers in an environment where there might be hundreds or even a thousand unique users connecting to a host (e.g. HPC clusters). It requires adding every username to the /etc/sub{u,g}id files. Maintaining this list of users is difficult and requires care to avoid overlapping subordinate ID ranges.

For some environments, it could be useful to support a feature where libpod fills the /etc/sub{u,g}id files with actual UIDs (rather than login names) and safely creates the mappings up to the default maximum value (65535), or maybe up to a configurable value/range. Using numerical UIDs seems like one way to enable rootless containers in a very large computing environment without significant configuration work, and it avoids the human error that leads to conflicting ranges.
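As a rough sketch of what that could look like, /etc/subuid and /etc/subgid entries keyed by numeric UID rather than login name might read like this (the UIDs and starting offsets below are made up for illustration):

1000:100000:65536
1001:165536:65536
1002:231072:65536

Each line grants a numeric UID its own non-overlapping block of 65536 subordinate IDs, in the same start:count layout shadow-utils already uses for entries keyed by login name.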

Additional information you deem important (e.g. issue happens only occasionally):

This is probably more useful as a non-default configuration option. I only anticipate people looking for this if they know they need it.

Additional environment details (AWS, VirtualBox, physical, etc.):

I expect this use case is highly specific to large-scale HPC environments using software like Slurm or Univa GridEngine to support large numbers of users across many distributed nodes. But it would be interesting to make containers more accessible to this kind of use case, and I don't see an easy way of doing that without rootless containers.

@rhatdan (Member) commented Jun 27, 2019

I have opened bugs to support managing the /etc/subuid and /etc/subgid files via nsswitch, which should get us closer to what you want.

@giuseppe (Member) commented Jun 27, 2019

I think it could be solved in the immediate future if we enhance newuidmap/newgidmap to allow a static assignment for the additional UIDs/GIDs.

It can be easily done by choosing a range of system users that are allowed to get additional IDs, then splitting the remaining IDs equally among those users.

Let's say we pick 0-65535 as the system users range: that leaves 4294901760 more IDs that we can assign. In this setup each of the 65536 users in 0-65535 gets 65535 additional IDs, with the pool of additional IDs starting at 65536.
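For illustration, a minimal shell sketch of the range computation this scheme implies (the UID 1100 is arbitrary, and treating the system-user range as 0-65535 inclusive is one reading of the proposal):

$ uid=1100
$ echo "$uid:$(( 65536 + uid * 65535 )):65535"
1100:72154036:65535

Because the start of the range is a pure function of the UID, no lookup in the user database or in /etc/subuid is needed to derive it.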

@ebiederm @hallyn what do you think about this idea? Is it something we could do in shadow-utils?

@jwflory (Contributor, Author) commented Jun 27, 2019

I have opened bugs to support managing the /etc/subuid and /etc/subgid files via nsswitch, which should get us closer to what you want.

@rhatdan Do you have links to those bugs I could follow too?

@rhatdan (Member) commented Jun 27, 2019

shadow-maint/shadow#154
Also internal email conversation with glibc developers.

@ebiederm commented Jun 28, 2019

@giuseppe (Member) commented Jun 28, 2019

Fedora does that automatically as well: every new user gets ~65k additional UIDs.

What I am suggesting is to make the mapping a function of the UID, not the user name, perhaps enabling this configuration through a new conf file. A UID -> additional range mapping would not require any lookup in the user DB to find the user name, nor in /etc/subuid to find the ranges allocated to that user.

In this way we will enable something like:

$ sudo chroot --userspec 1100:1100 / sh
sh-5.0$ unshare -U sleep 100 &
[1] 2637
sh-5.0$ newuidmap $! ....
newuidmap: Cannot determine your user name.

@SEJeff commented Jun 28, 2019

I just attempted (unsuccessfully) to put UIDs into /etc/subuid in place of usernames so I could trivially template it for a large range of users. If it supported both, the way the id command does, that would be enough to auto-generate it.

I like your idea from a pragmatic standpoint @giuseppe.
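As a hypothetical sketch of that kind of templating, assuming numeric UIDs were accepted in /etc/subuid (the UID range 1000-1999 and the base offset 100000 are made up):

for uid in $(seq 1000 1999); do
    # UID 1000 gets 100000-165535, UID 1001 gets 165536-231071, and so on
    printf '%s:%s:65536\n' "$uid" $(( 100000 + (uid - 1000) * 65536 ))
done >> /etc/subuid

Generating the file this way keeps the ranges non-overlapping by construction, with no per-user bookkeeping.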

@hallyn commented Jul 2, 2019

Hm, supporting uids in /etc/subuid should be simple and I don't see any issues with that. If that helps you, I don't mind accepting or writing a patch to that effect.

@SEJeff commented Jul 2, 2019

@hallyn my C is a bit rusty but I might take a stab at this. It would be a fun and simple one, thanks!

@giuseppe (Member) commented Jul 2, 2019

Hm, supporting uids in /etc/subuid should be simple and I don't see any issues with that. If that helps you, I don't mind accepting or writing a patch to that effect.

Interestingly, it seems to already work that way. We will need to adapt Podman to do the lookup in the same way when requesting a range.

@giuseppe (Member) commented Jul 2, 2019

@SEJeff could you check whether containers/storage#376 is enough to solve the issue? We need to re-vendor containers/storage in Podman.

@hallyn commented Jul 3, 2019

Ok so IIUC this is not an issue here. I'll close this. If I'm wrong, please reply :)

@SEJeff commented Jul 15, 2019

@hallyn can confirm it is working. Please close. Thanks!

@mheon closed this Jul 15, 2019
