nixos/users-groups: don't allocate coinciding subuid ranges with autoSubUidGidRange #386501
Conversation
For systems which currently have coinciding subuid ranges, the current code will silently assign new ranges. There are two ways I can think of to handle this:
I'd prefer the first option, since
However, if not breaking existing systems under any circumstances is important, I'd be happy to implement the second one.
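For illustration, the "coinciding subuid ranges" this comment refers to can be detected by checking whether any two users' subuid intervals overlap. A minimal sketch in Python (the actual NixOS activation script is Perl; the function name and data shape here are hypothetical):

```python
def find_collisions(ranges):
    """ranges: {user: (start, count)}, as in /etc/subuid lines "user:start:count".
    Return pairs of users whose intervals [start, start + count) overlap."""
    items = sorted(ranges.items(), key=lambda kv: kv[1][0])
    collisions = []
    for (u1, (s1, c1)), (u2, (s2, _)) in zip(items, items[1:]):
        if s2 < s1 + c1:  # next range starts before the previous one ends
            collisions.append((u1, u2))
    return collisions

# Two users that were handed the same range collide:
print(find_collisions({"alice": (100000, 65536), "bob": (100000, 65536)}))
# → [('alice', 'bob')]
# Distinct, adjacent ranges do not:
print(find_collisions({"alice": (100000, 65536), "bob": (165536, 65536)}))
# → []
```

The start value 100000 and range size 65536 are the usual NixOS defaults for auto-allocated subuid ranges.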
I agree going with option number 1 is the best option here, and gating the warning behind
Code changes LGTM. I've been trying this PR out on my systems (all non-affected) and haven't had any issues.
Force-pushed 5183a94 to 5c3ecb9
I've added the warning without the gate. Are you OK with the text?
Force-pushed 5c3ecb9 to 293018e
Force-pushed 293018e to 73925bf
I rebased this PR and added a release notes entry.
leona-ya left a comment:
This looks good to me and I also tried this on an affected machine. I added a release notes entry for more clarity.
…SubUidGidRange

Previously, when adding a new user while at least two users already existed, the new user was assigned the same subuid range as the second existing user.
Force-pushed 73925bf to ee4fc8a
As discussed on Matrix: moving this via staging to avoid hurting Hydra even more, given how many NixOS tests are affected. Also, thank you very much for the contribution!
- In `users.users` allocation on systems with multiple users it could happen that collided with others. Now these users get new subuid ranges assigned. When this happens, a warning is issued on the first activation. If the subuids were used (e.g. with rootless container managers like podman), please change the ownership of affected files accordingly.
@leona-ya I feel like a few words are missing in "it could happen that collided with others" – should this say something like "it could happen that some users' allocated subuid ranges collided with others"? (I am trying to keep up with release notes and did not at first understand what this was trying to say)
Yes, you are right. A better sentence would be:
In `users.users` subuid allocation on systems with multiple users it could happen that some users' allocated subuid ranges collided with others.
Opened #408018 – thank you!
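For anyone hit by the reallocation the release note above warns about: files written through the old subuid range (e.g. by rootless podman) keep their old numeric owners, and the corresponding uid in the new range sits at the same offset from the new start. A hypothetical helper to compute the new owner (the function name is invented, and the 65536 range size is an assumption based on the NixOS defaults):

```python
def remap_subuid(file_uid, old_start, new_start, count=65536):
    """Translate a file's owner uid from the old subuid range into the new one."""
    offset = file_uid - old_start
    if not 0 <= offset < count:
        raise ValueError("uid does not fall inside the old subuid range")
    return new_start + offset

# Container uid 1000 mapped through an old range starting at 100000,
# after the range moved to 165536:
print(remap_subuid(101000, old_start=100000, new_start=165536))  # → 166536
```

The old and new range starts can be read from `/etc/subuid` (format `user:start:count`) before and after activation.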
The removal of `$used->{$id} = 1;` from `allocId` is necessary since `allocSubUid` is called unconditionally for auto-allocated subuids, while uid/gid assignments do not call `alloc*` for existing ids. As its purpose is, as far as I can determine, to avoid repeated calls to `getpwuid`/`getgrgid`, this shouldn't have much of an impact (only when adding more than one user/group at a time).
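The bookkeeping issue described above can be illustrated with a toy first-fit allocator: each call must see every previously handed-out range, otherwise a later call returns a range that is already taken. This is a loose Python model of the behavior, not the actual Perl logic in `update-users-groups.pl`; the constants assume the NixOS defaults:

```python
SUBUID_START = 100000  # assumed NixOS default lower bound
RANGE_SIZE = 65536     # assumed per-user range size

def alloc_sub_uid(used_starts, record=True):
    """First-fit allocation of a subuid range start.

    If an allocation is not recorded in the shared bookkeeping
    (record=False), the next call hands out the very same range
    again -- the coinciding-range bug this PR addresses."""
    start = SUBUID_START
    while start in used_starts:
        start += RANGE_SIZE
    if record:
        used_starts.add(start)
    return start

used = set()
print(alloc_sub_uid(used))  # → 100000
print(alloc_sub_uid(used))  # → 165536

buggy = set()
print(alloc_sub_uid(buggy, record=False))  # → 100000
print(alloc_sub_uid(buggy, record=False))  # → 100000 (collision)
```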