Ability to disable cgroup management in pam_systemd.so #13535

Closed
lipixx opened this issue Sep 12, 2019 · 1 comment

@lipixx

lipixx commented Sep 12, 2019

systemd version the issue has been seen with

master

Used distribution

RHEL 7, 8

Expected behaviour you didn't see

I expected a PAM module to be able to modify the cgroup of a process without any interference from systemd.

Unexpected behaviour you saw

After my pam module ran, I saw systemd modifying the cgroup.

Steps to reproduce the problem
Enable a PAM module for sshd that puts the ssh session process into a cgroup, while keeping the pam_systemd.so module enabled in the PAM stack.

The issue is this: in an environment where you want to control remote ssh sessions and put them into a container/cgroup when the user logs in, while also getting the XDG_* environment variables and directories, you need to enable pam_systemd.so. But pam_systemd.so then moves the ssh process into its own cgroup, overwriting what my PAM module did. I can place my session module after pam_systemd.so so that mine runs last and does the right thing, but I cannot tell whether that is safe from pam_systemd's point of view; it seems unreliable.
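For illustration, the relevant part of /etc/pam.d/sshd in that arrangement looks roughly like this (a sketch only; pam_myjob.so is a hypothetical stand-in name for the site-specific module that moves the session into the job cgroup):

session    optional    pam_systemd.so
# hypothetical site module, placed after pam_systemd.so so it runs last
# and re-assigns the session to the job cgroup
session    required    pam_myjob.so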

For example, this is what it looks like when I have my custom PAM module set after the systemd one:
~]$ ssh gamba1
~]$ cat /proc/self/cgroup
11:cpuset:/slurm_gamba1/uid_1000/job_20445/step_extern
10:net_cls,net_prio:/
9:perf_event:/
8:freezer:/slurm_gamba1/uid_1000/job_20445/step_extern
7:pids:/user.slice/user-1000.slice/session-12.scope
6:cpu,cpuacct:/slurm_gamba1/uid_1000/job_20445/step_extern/task_0
5:devices:/slurm_gamba1/uid_1000/job_20445/step_extern
4:blkio:/
3:memory:/slurm_gamba1/uid_1000/job_20445/step_extern/task_0
2:hugetlb:/
1:name=systemd:/user.slice/user-1000.slice/session-12.scope
0::/user.slice/user-1000.slice/session-12.scope

And this is when I don't have my module set:
]$ cat /proc/self/cgroup
11:cpuset:/
10:net_cls,net_prio:/
9:perf_event:/
8:freezer:/
7:pids:/user.slice/user-1000.slice/session-13.scope
6:cpu,cpuacct:/
5:devices:/user.slice
4:blkio:/
3:memory:/user.slice/user-1000.slice/session-13.scope
2:hugetlb:/
1:name=systemd:/user.slice/user-1000.slice/session-13.scope
0::/user.slice/user-1000.slice/session-13.scope

I don't want systemd to manage cgroups in PAM, and I think this is a bug, but I still want the XDG_* handling.

Is it possible to split the module in two, or just add an option like:

session optional pam_systemd.so cgroup_mgmt=no

Or otherwise, is it reliable to move the process into another cgroup when pam_systemd.so has already moved it into its own? What should I expect to go wrong if I do so?

@poettering
Member

Sorry, but this is not and will not be supported. systemd takes possession of the cgroup root. Other programs may only make direct modifications to the tree if they requested a delegated subtree, which they then fully own. But the top-level tree is off limits: it is owned by systemd and nothing else.

This is not a rule systemd made up; it's mandated by the kernel devs: they want single-writer logic, and that's what systemd implements.

Sorry, but this is not going to happen. If you don't like this design, please discuss with the kernel devs making cgroups a multi-writer concept, but I doubt they will, and I think they are right.
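(For reference: a delegated subtree is requested via the Delegate= property documented in systemd.resource-control(5). A minimal sketch, assuming the workload can be wrapped in a transient scope; the payload path below is a placeholder, and this is illustrative rather than a recommendation from this thread:)

# Run a payload inside a transient scope whose cgroup subtree is delegated,
# so the program may manage the hierarchy underneath it itself.
systemd-run --scope --property=Delegate=yes -- /path/to/payload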

This is very well documented btw, and has been for a long time.

Closing hence.
