Limit shared mounts to specific VM (launcher) Pods - cleanup needed #615

Open
davidvossel opened this Issue Dec 13, 2017 · 7 comments

4 participants
davidvossel (Member) commented Dec 13, 2017

Because of #613, the VM pods are no longer trusted. Users need podExec access in order to access a VM's console.

This means all local shared data between VM pods in /var/lib/kubevirt needs to change to a pattern where VM pods can only see data directly pertaining to them.
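As a sketch of the target pattern (the pod name, image, and paths below are illustrative placeholders, not the actual KubeVirt layout), each launcher pod would mount only a per-VM subdirectory instead of the whole shared directory:

```yaml
# Hypothetical per-VM mount: the launcher pod sees only its own
# subdirectory of the host path, not data belonging to other VMs.
apiVersion: v1
kind: Pod
metadata:
  name: virt-launcher-testvm        # placeholder name
spec:
  containers:
  - name: compute
    image: kubevirt/virt-launcher   # placeholder image
    volumeMounts:
    - name: private-data
      mountPath: /var/run/kubevirt-private
  volumes:
  - name: private-data
    hostPath:
      # Instead of mounting all of /var/lib/kubevirt into every
      # VM pod, mount a directory scoped to this one VM:
      path: /var/lib/kubevirt/testvm
```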

fabiand (Member) commented Apr 4, 2018

This one should be fixed after the subresource work, right?

davidvossel (Member) commented Apr 4, 2018

We still have the situation where sockets are shared between virt-launcher and virt-handler; this is required for communication between the two components.

However, with the addition of the subresource stream api for console and vnc, there's no reason for a user to need podExec permissions to use KubeVirt.

I'd prefer the VM pod to not need any host shared mounts at some point.
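For illustration, the subresource-based flow described above might look like the following (`virtctl` is the KubeVirt CLI; the VM name `testvm` is a placeholder):

```shell
# Connect to the serial console through the API server's
# subresource stream endpoint -- no pods/exec RBAC needed:
virtctl console testvm

# Likewise for graphical access:
virtctl vnc testvm

# Previously, console access required exec'ing into the
# launcher pod, which is what granted users podExec access:
# kubectl exec -it virt-launcher-testvm-xxxxx -- <console command>
```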

fabiand (Member) commented Apr 4, 2018

Thanks.

Yes, we still have host mounts, and I agree we should get rid of them. Keeping it open.

@fabiand fabiand added this to the v1.2 milestone Apr 4, 2018

@fabiand fabiand changed the title from VM Pod shared mount security cleanup to Limit shared mounts to specific VM (launcher) Pods - cleanup needed May 28, 2018

@fabiand fabiand self-assigned this May 28, 2018

kubevirt-bot commented Sep 20, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now, please do so with /close.

/lifecycle stale

fabiand (Member) commented Sep 20, 2018

davidvossel (Member) commented Sep 20, 2018

yeah, it's definitely still a thing.

rmohr (Member) commented Sep 20, 2018

/remove-lifecycle stale
