Limit shared mounts to specific VM (launcher) Pods - cleanup needed #615
Comments
This was referenced Dec 13, 2017
fabiand added the kind/enhancement, area/launcher, topic/security labels Jan 10, 2018
This one should be fixed after the subresource work, right?
davidvossel (Member) commented Apr 4, 2018
We still have the situation where sockets are shared between virt-launcher and virt-handler; this is required for communication between the two components.
However, with the addition of the subresource stream API for console and VNC, there's no reason for a user to need podExec permissions to use KubeVirt.
I'd prefer that the VM pod not need any host shared mounts at some point.
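To make the distinction concrete, here is a hedged sketch of the two access patterns the comment contrasts. The launcher pod name and VM name (`testvm`) are illustrative, not from the issue; the point is that the subresource path goes through the apiserver instead of requiring `pods/exec` RBAC on the launcher pod.

```shell
# Old pattern (requires pods/exec permission on the launcher pod; names illustrative):
#   kubectl exec -it virt-launcher-testvm-abcde -- <attach to console socket>
#
# New pattern via the subresource stream API (served by the apiserver,
# no podExec needed; virtctl is KubeVirt's CLI client):
#   virtctl console testvm
#   virtctl vnc testvm
echo "console access via subresource API, no podExec needed"
```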
fabiand (Member) commented Apr 4, 2018
Thanks.
Yes, we still have host mounts, and I agree we should get rid of them. Keeping it open.
fabiand added this to the v1.2 milestone Apr 4, 2018
fabiand changed the title from "VM Pod shared mount security cleanup" to "Limit shared mounts to specific VM (launcher) Pods - cleanup needed" May 28, 2018
fabiand self-assigned this May 28, 2018
kubevirt-bot commented Sep 20, 2018
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
kubevirt-bot added the lifecycle/stale label Sep 20, 2018
@davidvossel is this still a thing?
yeah, it's definitely still a thing.
/remove-lifecycle stale
davidvossel commented Dec 13, 2017
Because of #613 the VM pods are no longer trusted. Users need podExec access in order to access the VM's console.
This means all local shared data between VM pods in /var/lib/kubevirt needs to change to a pattern where VM pods can only see data directly pertaining to them.
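The isolation pattern described above can be sketched as a per-VM subdirectory under the shared host path, so each launcher pod mounts only the subPath for its own VM rather than all of /var/lib/kubevirt. This is a minimal illustration, not KubeVirt's actual layout; the `sockets` subdirectory name and the helper function are assumptions.

```go
// Hypothetical sketch: derive a per-VM private directory under the shared
// host path, keyed by namespace and VM name, so a launcher pod's volume
// mount can be scoped to this subPath instead of the whole directory tree.
package main

import (
	"fmt"
	"path/filepath"
)

// kubevirtRoot is the shared host directory mentioned in the issue.
const kubevirtRoot = "/var/lib/kubevirt"

// privateSocketDir returns the per-VM subdirectory a launcher pod would
// mount, so it can only see data directly pertaining to its own VM.
func privateSocketDir(namespace, vmName string) string {
	return filepath.Join(kubevirtRoot, "sockets", namespace, vmName)
}

func main() {
	fmt.Println(privateSocketDir("default", "testvm"))
}
```

With this layout, virt-handler (which runs privileged on the node) can still reach every VM's sockets, while each launcher pod's mount is limited to its own subdirectory.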