
Create new SELinux policy for kubevirt project #87

Closed
wants to merge 1 commit

Conversation

wrabcak (Member) commented Jan 29, 2020

New container type "virt_launcher_t" should allow running VMs inside
containers.

This solution is provided as a workaround for the following Bugzilla tickets:
https://bugzilla.redhat.com/show_bug.cgi?id=1795975
https://bugzilla.redhat.com/show_bug.cgi?id=1795964

container.te Outdated
dev_rw_mtrr(virt_launcher_t)
dev_rw_sysfs(virt_launcher_t)

virt_sandbox_net_domain(virt_launcher_t)
Member
I would prefer it if you added container_domain to this.


If I understand what you meant by this: the parent (container_domain) is the domain that should be added to sandbox_net_domain?

Member Author

@rhatdan, we cannot assign an attribute to an attribute in m4 :/

Member

No, I am suggesting:
typeattribute virt_launcher_t container_domain;

Member Author

Policy updated.
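Once the updated module is loaded, the attribute assignment can be verified with the setools `seinfo` utility (a sketch; assumes setools-console is installed on a host running this policy):

```shell
# List all types carrying the container_domain attribute;
# virt_launcher_t should appear in the output once the module is loaded.
seinfo -a container_domain -x
```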

manage_dirs_pattern(virt_launcher_t, container_var_run_t, container_var_run_t)
manage_files_pattern(virt_launcher_t, container_var_run_t, container_var_run_t)

manage_dirs_pattern(virt_launcher_t, container_share_t, container_share_t)
Member

Why does it need this access?


I do not think this line is useful. I removed it from the module in kubevirt/kubevirt#2934 -- which is still pending merge, but the fate of that PR is tied to this one.

I inspected the filesystems on random virt-launcher and virt-handler containers. The container_share_t label was not present at all. If it were present (and we really needed write access to it), it would much more likely be properly fixed by relabelling than by adding an allow-write rule to a type that's supposed to be read-only.

Member Author

As this is a temporary solution, I allowed what was in the previous CIL policy. @fabiand mentioned that they will work in the KubeVirt project on dropping some access to make these containers more secure. No problem, I can remove this line.

Member

We can create a type in the policy for virt-launcher, so that it would create content with that type when it runs in the /var/run directory.

type virt_launcher_var_run_t;
files_pid_file(virt_launcher_var_run_t)
files_pid_filetrans(virt_launcher_t, virt_launcher_var_run_t, { dir file lnk_file sock_file })
filetrans_pattern(virt_launcher_t, container_var_run_t, virt_launcher_var_run_t, dir)
manage_files_pattern(virt_launcher_t, virt_launcher_var_run_t, virt_launcher_var_run_t)
manage_lnk_files_pattern(virt_launcher_t, virt_launcher_var_run_t, virt_launcher_var_run_t)
manage_dirs_pattern(virt_launcher_t, virt_launcher_var_run_t, virt_launcher_var_run_t)

Member

Add a file context
/var/run/kubevirt-private(/.*)? gen_context(system_u:object_r:virt_launcher_var_run_t,s0)
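On a host where the module does not yet ship this context, the same labeling could be applied at runtime (a sketch using standard SELinux management tools; the path is taken from the discussion above):

```shell
# Register the file context rule, then relabel the existing tree to match.
semanage fcontext -a -t virt_launcher_var_run_t '/var/run/kubevirt-private(/.*)?'
restorecon -Rv /var/run/kubevirt-private
```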

Member

We could do similar for the content created in /tmp.

Member Author

Changed in the commit.

container.te Outdated
manage_files_pattern(virt_launcher_t, container_share_t, container_share_t)

manage_dirs_pattern(virt_launcher_t, container_runtime_tmp_t, container_runtime_tmp_t)
manage_files_pattern(virt_launcher_t, container_runtime_tmp_t, container_runtime_tmp_t)
Member

Are these runtime files being added to the container?
Do we know what command they are executing to launch the containers?


Some time ago we used /tmp/healthy for file-based health monitoring; we don't anymore. I'm wondering if this is cruft left over from that.

I'm currently verifying whether there are any other consequences to removing this.

Member Author

Okay, this could probably also be removed.


I tested removing this line and lots of tests failed. Oddly, they failed on access to files such as this:

/var/run/kubevirt-private/vmi-disks/disk0/disk.img

That is probably a secondary effect, as the file is bind-mounted from elsewhere. Would it be OK to leave this rule in place?


Follow-up: the above is a side-effect. CDI's upload server still uses /tmp/healthy to report status. Either the startup script is crashing because it can't write there, or k8s is crashlooping the pod because it never reports as healthy.

One KubeVirt component still does require read/write access to /tmp.

Member

Can we just label the content in /var/run/kubevirt-private as container_file_t, or volume-mount it into the container engine with the :Z or :z options?
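For reference, the :Z/:z options look like this when mounting a volume with an OCI engine such as Podman (a sketch; the image name is hypothetical, and in KubeVirt the mounts are actually set up by kubelet/CRI-O rather than by hand):

```shell
# :Z relabels the volume private to this container (unshared MCS categories);
# :z would relabel it as shared between containers.
podman run -v /var/run/kubevirt-private:/var/run/kubevirt-private:Z \
    registry.example.com/virt-launcher:latest
```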


The issue with /var/run/kubevirt-private was a red herring. The real issue was that the CDI upload server failed to start when I removed permission to write to /tmp.

container.te Outdated

allow virt_launcher_t self:tun_socket { attach_queue relabelfrom relabelto create_socket_perms };

manage_dirs_pattern(virt_launcher_t, container_var_run_t, container_var_run_t)
Member

This looks wrong; is this a labeling issue? I.e., are we mounting content in /run that was created by the container runtime, without a :Z?


/var/run/kubevirt-infra/healthy is being written to by the virt-launcher process. Since that's technically a new file created after the container is running, would that be why this is needed?

Member Author

@stu-gott are you bind mounting this file to the container space?


Yes I believe so. virt-handler reports health on behalf of virt-launcher (which is isolated from cluster resources), thus the bind mount.

Member Author

@rhatdan do we have something like :Z for bind mounts in Kubernetes?

Member

You can add this to the destination, I believe, and then CRI-O will relabel the content.


I think it would be better if we left KubeVirt as-is for the moment. Otherwise we get into this weird situation where we need a compatibility matrix to know which versions of KubeVirt will run on which versions of RHCOS.

wrabcak (Member Author) commented Jan 29, 2020

@fabiand could you please look at the comments from Dan? I'm not sure what's going on inside the container.
Thanks,
Lukas.

stu-gott commented

@wrabcak I'm currently verifying whether or not we still need read/write access to /tmp; other than that, I've added my insights to @rhatdan's questions and observations. If there are further questions or points needing clarification, please don't hesitate to ask.

wrabcak (Member Author) commented Jan 30, 2020

Hi @stu-gott,
I also added some comments. Please review.

fabiand commented Feb 4, 2020

@rhatdan @wrabcak thanks for the feedback. I do agree with Stu that I'd prefer to see the policies taken over as they are, because this is how they exist in upstream KubeVirt.
Please bear in mind that this is to enable KubeVirt initially; the long-term approach is to move to sVirt, which @enp0s3 is working on.

Thus I'd really appreciate it if we could merge this as is and get rid of it again in a year.

wrabcak (Member Author) commented Feb 4, 2020

I understand the pressure to merge the PR. However, @rhatdan needs to decide.

Thanks,
Lukas.

rhatdan (Member) commented Feb 5, 2020

If you would clarify which directories KubeVirt needs to write to, then we could do a much better job of securing the container. If we are going to just allow it to write all over the system, then we should just run it as spc_t.

fabiand commented Feb 5, 2020 via email

wrabcak (Member Author) commented Feb 6, 2020

I updated the PR based on comments from @rhatdan and @stu-gott.

wrabcak (Member Author) commented Feb 14, 2020

What is the current state, @fabiand? Are you using spc_t?

fabiand commented Feb 14, 2020 via email

wrabcak (Member Author) commented Feb 14, 2020

Understood.

@rhatdan Could we close the PR?

rhatdan (Member) commented Feb 14, 2020

Sure, but we are also having a lot of conversations about how to support Kata Containers, which might have similar issues to KubeVirt. Basically we need an lvm_container_t that is able to run as a VM and to read/write container_file_t content.

wrabcak (Member Author) commented Feb 15, 2020

@rhatdan, understood. We should open an issue on this project where we can discuss it. I don't think this is a good place for a discussion about Kata Containers.

Thanks,
Lukas.

rhatdan (Member) commented Mar 29, 2020

We now have container_kvm_t, which satisfies this need.

rhatdan closed this Mar 29, 2020
fabiand commented Jun 9, 2020
fabiand commented Jun 9, 2020

Just for the record: container_kvm_t is probably not what KubeVirt can use. But we have now switched to an approach where we ship our custom type during KubeVirt deployment.
