Create new SELinux policy for kubevirt project #87
Conversation
container.te (outdated):
    dev_rw_mtrr(virt_launcher_t)
    dev_rw_sysfs(virt_launcher_t)
    virt_sandbox_net_domain(virt_launcher_t)
I would prefer if you added the container_domain to this?
If I understand what you meant by this, the parent (container_domain) is the domain that should be added to sandbox_net_domain?
@rhatdan, we cannot assign an attribute to an attribute in m4 :/
No, I am suggesting that:
typeattribute virt_launcher_t container_domain;
Policy updated.
    manage_dirs_pattern(virt_launcher_t, container_var_run_t, container_var_run_t)
    manage_files_pattern(virt_launcher_t, container_var_run_t, container_var_run_t)
    manage_dirs_pattern(virt_launcher_t, container_share_t, container_share_t)
Why does it need this access?
I do not think this line is useful. I removed it from the module in kubevirt/kubevirt#2934 -- which is still pending merge, but the fate of that PR is tied to this one.
I inspected the filesystems on random virt-launcher and virt-handler containers. The container_share_t label was not present at all. If it were present (and we really needed write access to it), that's much more likely to be properly fixed with relabelling than adding an allow-write rule to a type that's supposed to be read-only.
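For reference, if read access to container_share_t did turn out to be needed, a read-only pattern would be the safer alternative to manage_* rules on a type that is meant to be read-only. A rough sketch in container-selinux style (not part of this PR):

# Hypothetical read-only access to container_share_t content
list_dirs_pattern(virt_launcher_t, container_share_t, container_share_t)
read_files_pattern(virt_launcher_t, container_share_t, container_share_t)
read_lnk_files_pattern(virt_launcher_t, container_share_t, container_share_t)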
As this is a temporary solution, I allowed what was in the previous CIL policy. @fabiand mentioned that they will work on the KubeVirt project to drop some access to make these containers more secure. No problem, I can remove this line.
We can create a type for virt-launcher in the policy; then it would create content with that type when it runs in the /var/run directory.
type virt_launcher_var_run_t;
files_pid_file(virt_launcher_var_run_t)
files_pid_filetrans(virt_launcher_t, virt_launcher_var_run_t, { dir file lnk_file sock_file })
filetrans_pattern(virt_launcher_t, container_var_run_t, virt_launcher_var_run_t, dir)
manage_files_pattern(virt_launcher_t, virt_launcher_var_run_t, virt_launcher_var_run_t)
manage_lnk_files_pattern(virt_launcher_t, virt_launcher_var_run_t, virt_launcher_var_run_t)
manage_dirs_pattern(virt_launcher_t, virt_launcher_var_run_t, virt_launcher_var_run_t)
Add a file context
/var/run/kubevirt-private(/.*)? gen_context(system_u:object_r:virt_launcher_var_run_t,s0)
We could do similar for the content created in /tmp.
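For reference, the analogous private type for /tmp content would look roughly like this, using the standard refpolicy tmp interfaces; the type name is illustrative and this is only a sketch:

# Hypothetical private /tmp type for virt-launcher (illustrative sketch)
type virt_launcher_tmp_t;
files_tmp_file(virt_launcher_tmp_t)
# Transition content that virt_launcher_t creates in /tmp to the private type
files_tmp_filetrans(virt_launcher_t, virt_launcher_tmp_t, { dir file lnk_file sock_file })
manage_dirs_pattern(virt_launcher_t, virt_launcher_tmp_t, virt_launcher_tmp_t)
manage_files_pattern(virt_launcher_t, virt_launcher_tmp_t, virt_launcher_tmp_t)
manage_lnk_files_pattern(virt_launcher_t, virt_launcher_tmp_t, virt_launcher_tmp_t)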
Changed in the commit.
container.te (outdated):
    manage_files_pattern(virt_launcher_t, container_share_t, container_share_t)
    manage_dirs_pattern(virt_launcher_t, container_runtime_tmp_t, container_runtime_tmp_t)
    manage_files_pattern(virt_launcher_t, container_runtime_tmp_t, container_runtime_tmp_t)
Are these runtime files being added to the container?
Do we know what command they are executing to launch the containers?
Some time ago we used to use /tmp/healthy for file-based health monitoring; we don't anymore. I'm wondering if this is cruft left over from that.
I'm currently verifying whether there are any other consequences to removing this.
Okay, this probably could be removed also.
I tested removing this line and lots of tests failed. Oddly, they failed on access to files such as this:
/var/run/kubevirt-private/vmi-disks/disk0/disk.img
which is probably a secondary effect, as that is bind-mounted from elsewhere. Would it be OK to leave this rule in place?
Follow-up: the above is a side-effect. CDI's upload server still uses /tmp/healthy to report status. Either the startup script is crashing because it can't write there, or k8s is crashlooping the pod because it never reports as healthy.
One KubeVirt component still does require read/write access to /tmp.
Can we just label the content in /var/run/kubevirt-private as container_file_t, or volume-mount it via the container engine with the :Z or :z options?
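For reference, the labeling variant would amount to a file context entry along these lines; this is only a sketch of the idea, not something proposed in the PR:

# Hypothetical file context: label the private run directory as generic container content
/var/run/kubevirt-private(/.*)?    gen_context(system_u:object_r:container_file_t,s0)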
The issue with /var/run/kubevirt-private was a red herring. The real issue was that the CDI upload server failed to start when I removed permissions to write to /tmp.
container.te (outdated):
    allow virt_launcher_t self:tun_socket { attach_queue relabelfrom relabelto create_socket_perms };
    manage_dirs_pattern(virt_launcher_t, container_var_run_t, container_var_run_t)
This looks wrong; is this a labeling issue? I.e., are we mounting content in /run created by the container runtime without a :Z?
/var/run/kubevirt-infra/healthy is being written to by the virt-launcher process. Since that's technically a new file created after the container is running, would that be why this is needed?
@stu-gott are you bind mounting this file to the container space?
Yes I believe so. virt-handler reports health on behalf of virt-launcher (which is isolated from cluster resources), thus the bind mount.
@rhatdan do we have something like :Z for bind mounts in Kubernetes?
You can add this to the destination, I believe, and then CRI-O will relabel the content.
I think it would be better if we left KubeVirt as-is for the moment. Otherwise we get into this weird situation where we need a compatibility matrix to know what versions of kubevirt will run on what versions of RHCOS.
@fabiand could you please look at the comments from Dan? I'm not sure what's going on inside a container.
Hi @stu-gott,
@rhatdan @wrabcak thanks for the feedback. I do agree with Stu that I'd prefer to see the policies taken over as they are, because this is how they exist in upstream KubeVirt. Thus I'd really appreciate it if we could merge it as is and get rid of it again in a year.
I understand the pressure to merge the PR. However, @rhatdan needs to decide. Thanks,
If you would clarify which directories KubeVirt needs to write to, then we could do a much better job of securing the container. If we are going to just allow it to write all over the system, then we should just run it as spc_t.
I understand. Sadly - and it's not unwillingness - I am just not able to provide the relevant data.
Thank you for the support, but I suppose we should then just go with spc_t and focus on the other side to go with svirt.
(branch updated from 9b8acc2 to 8d594b8)
New container type "virt_launcher_t" should allow running VMs inside containers. The solution is provided as a workaround for the following Bugzilla tickets: https://bugzilla.redhat.com/show_bug.cgi?id=1795975 https://bugzilla.redhat.com/show_bug.cgi?id=1795964
(branch updated from 8d594b8 to 9a1f11a)
What is the current state, @fabiand? Are you using spc_t?
KubeVirt was using spc_t, but then came up with its own custom SELinux policy. This will work today for non-RHCOS/FCOS hosts.
Red Hat's CNV is using spc_t (based on the KubeVirt code, which was also using spc_t), and will now continue to use spc_t until we are able to switch to svirt_t.
The intermediate steps - like adding our custom policy to container-selinux - are just a little difficult to do at the moment.
Understood. @rhatdan, could we close the PR?
Sure, but we are also having a lot of conversations about how to support Kata containers, which might have similar issues to KubeVirt. Basically we need a kvm_container_t that is able to run as a VM and able to read/write container_file_t content.
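For illustration, a very rough sketch of what such a KVM-capable container type could look like in container-selinux style; this is purely hypothetical and is not the actual container_kvm_t policy that was added later:

# Hypothetical KVM-capable container type (illustrative only)
type kvm_container_t;
domain_type(kvm_container_t)
# Give it the base container rules
typeattribute kvm_container_t container_domain;
# Allow access to /dev/kvm so the domain can run hardware-accelerated VMs
dev_rw_kvm(kvm_container_t)
# Allow reading and writing ordinary container content
manage_dirs_pattern(kvm_container_t, container_file_t, container_file_t)
manage_files_pattern(kvm_container_t, container_file_t, container_file_t)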
@rhatdan, understood. We should open an issue on this project where we can discuss it. I don't think this is a good place for a discussion about Kata containers. Thanks,
We now have container_kvm_t, which satisfies this need.
Just for the record: container_kvm_t is probably not what KubeVirt can use. But now we have switched to an approach where we are shipping our custom type during KubeVirt deployment.