
agent: Run container workload in its own cgroup namespace (cgroup v2 guest only) #9125

Merged (2 commits) on Feb 23, 2024
11 changes: 6 additions & 5 deletions src/agent/Cargo.lock


4 changes: 4 additions & 0 deletions src/agent/rustjail/src/container.rs
@@ -556,6 +556,10 @@ fn do_init_child(cwfd: RawFd) -> Result<()> {

sched::unshare(to_new & !CloneFlags::CLONE_NEWUSER)?;

if cgroups::hierarchies::is_cgroup2_unified_mode() {
sched::unshare(CloneFlags::CLONE_NEWCGROUP)?;
}
Comment on lines +559 to +561
Member


Is this isolation required for cgroup v1?

Member Author


Is this isolation required for cgroup v1?

Hi Xavier, I was kinda expecting this question 😉

Cgroup v1 doesn't have the problem with /sys/fs/cgroup as the agent bind mounts the appropriate directories in the container.

There is some leaking in /proc/self/cgroup though, as it partially exposes details that belong to the guest OS. For example, this is what we get inside a kata container on OpenShift 4.11 (soon reaching EOL):

bash-5.2$ cat /proc/self/cgroup 
12:memory:/crio/bfca835403a5c2629942b254fe8d850c069576be14a292ac3cd3a77f9b1958b4
11:blkio:/crio/bfca835403a5c2629942b254fe8d850c069576be14a292ac3cd3a77f9b1958b4
10:hugetlb:/crio/bfca835403a5c2629942b254fe8d850c069576be14a292ac3cd3a77f9b1958b4
9:cpuset:/crio/bfca835403a5c2629942b254fe8d850c069576be14a292ac3cd3a77f9b1958b4
8:rdma:/crio/bfca835403a5c2629942b254fe8d850c069576be14a292ac3cd3a77f9b1958b4
7:cpu,cpuacct:/crio/bfca835403a5c2629942b254fe8d850c069576be14a292ac3cd3a77f9b1958b4
6:devices:/crio/bfca835403a5c2629942b254fe8d850c069576be14a292ac3cd3a77f9b1958b4
5:net_cls,net_prio:/crio/bfca835403a5c2629942b254fe8d850c069576be14a292ac3cd3a77f9b1958b4
4:pids:/crio/bfca835403a5c2629942b254fe8d850c069576be14a292ac3cd3a77f9b1958b4
3:freezer:/crio/bfca835403a5c2629942b254fe8d850c069576be14a292ac3cd3a77f9b1958b4
2:perf_event:/crio/bfca835403a5c2629942b254fe8d850c069576be14a292ac3cd3a77f9b1958b4
1:name=systemd:/crio/bfca835403a5c2629942b254fe8d850c069576be14a292ac3cd3a77f9b1958b4

The container should not see that CRI-O is involved, but this is really minor and hasn't caused any concern so far.

I did try unsharing the cgroup namespace for cgroup v1 as well, as an experiment, and it resulted in the container not starting. Since cgroup v1 in the guest isn't really my use case, I'll leave that to someone who cares and stick to fixing the cgroup v2 experience only in this PR (I updated the PR title to make this explicit).
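For illustration, here is a minimal standalone sketch of the v2-only guard this PR adds, written against the nix crate; the unified-mode check via cgroup.controllers is an illustrative stand-in for the agent's cgroups::hierarchies::is_cgroup2_unified_mode() helper, not the agent's exact code:

// Minimal sketch, not the agent's exact code: unshare the cgroup namespace
// only when the guest runs a pure cgroup v2 (unified) hierarchy.
use nix::sched::{unshare, CloneFlags};
use std::path::Path;

// On a cgroup v2-only system, /sys/fs/cgroup is the unified hierarchy and
// exposes cgroup.controllers at its root. This check stands in for
// cgroups::hierarchies::is_cgroup2_unified_mode().
fn is_cgroup2_unified_mode() -> bool {
    Path::new("/sys/fs/cgroup/cgroup.controllers").exists()
}

fn main() -> nix::Result<()> {
    if is_cgroup2_unified_mode() {
        // After this, children of this process see their own cgroup as "/"
        // in /proc/self/cgroup instead of the guest-level path.
        unshare(CloneFlags::CLONE_NEWCGROUP)?;
    }
    Ok(())
}

With the namespace unshared on a cgroup v2 guest, cat /proc/self/cgroup inside the container should show just 0::/ rather than the guest-level cgroup path.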

Member


Sounds good to me, thanks!


if userns {
bind_device = true;
}