extract namespace from network annotation when checking for kubevirt related pod instance #141

Closed

Conversation


@RamLavi RamLavi commented May 11, 2020

Signed-off-by: Ram Lavi <ralavi@redhat.com>

What this PR does / why we need it:
Short background:
When running a VM (virtctl start <vm-name>), virt-controller creates the virt-launcher pod.
Although kubemacpool allocates MAC addresses to pods, it should disregard this pod (because the MAC was already assigned to the VM). For this reason, kubemacpool uses isRelatedToKubevirt() to check whether it needs to treat the pod as standalone or ignore it.
Specifically, isRelatedToKubevirt() does this by checking whether the VM self link is valid.

Currently, kubemacpool uses the pod's own namespace to construct this link.
This doesn't work, because for some reason the pod is received without any namespace assigned to it. As a result, the pod is treated as a standalone pod and is therefore rejected.

In this PR we switch to taking the namespace from the pod's network annotation, which contains the network's namespace.
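
For illustration only, a minimal sketch of the idea, assuming the standard multus annotation key ("k8s.v1.cni.cncf.io/networks", JSON form only), a simplified stand-in for multus.NetworkSelectionElement, and the usual Kubernetes API path convention for the VM lookup; the helper names are hypothetical and the actual kubemacpool code may differ:

import (
	"encoding/json"
	"fmt"
)

// networkSelectionElement is an illustrative stand-in for the relevant
// fields of multus.NetworkSelectionElement.
type networkSelectionElement struct {
	Name      string `json:"name"`
	Namespace string `json:"namespace,omitempty"`
}

const networksAnnotation = "k8s.v1.cni.cncf.io/networks"

// namespaceFromNetworksAnnotation (hypothetical helper) parses the multus
// networks annotation and returns the namespace of the first network entry.
// The plain comma-separated form of the annotation is ignored in this sketch.
func namespaceFromNetworksAnnotation(annotations map[string]string) (string, bool) {
	raw, exists := annotations[networksAnnotation]
	if !exists {
		return "", false
	}
	var networks []networkSelectionElement
	if err := json.Unmarshal([]byte(raw), &networks); err != nil || len(networks) == 0 {
		return "", false
	}
	return networks[0].Namespace, networks[0].Namespace != ""
}

// vmRequestPath (hypothetical) shows where that namespace ends up: the VM
// lookup path behind the "self link" check. kubevirt.io/v1alpha3 is assumed.
func vmRequestPath(namespace, vmName string) string {
	return fmt.Sprintf("apis/kubevirt.io/v1alpha3/namespaces/%s/virtualmachines/%s", namespace, vmName)
}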

Special notes for your reviewer:
This PR fixes #140.

Release note:

NONE


RamLavi commented May 11, 2020

/hold
This PR should not be merged until it is fully understood why the pod request does not have the VM namespace (bug or design change?).

pkg/pool-manager/pod_pool.go (Outdated)
@@ -323,14 +323,14 @@ func (p *PoolManager) IsKubevirtEnabled() bool {
return p.isKubevirt
}

-func (p *PoolManager) isRelatedToKubevirt(pod *corev1.Pod) bool {
+func (p *PoolManager) isRelatedToKubevirt(pod *corev1.Pod, kubevirtObjectNamespace string) bool {
This is no longer a kubevirtObject, right? Since we take it from the networks, I would name this just namespace so we hide where it is coming from.

@@ -348,6 +358,18 @@ func (p *PoolManager) isRelatedToKubevirt(pod *corev1.Pod) bool {
return false
}

+// making sure that all networks have the same namespace, otherwise returning false
+func (p *PoolManager) selectNamespaceFromNetworks(networks []*multus.NetworkSelectionElement) (string, bool) {
@qinqon qinqon May 11, 2020

This can be a receiver-free function; we don't use any PoolManager state.
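
A rough sketch of that suggestion, keeping the signature from the diff above but dropping the PoolManager receiver; how empty namespaces are handled here is a guess, not taken from the PR:

func selectNamespaceFromNetworks(networks []*multus.NetworkSelectionElement) (string, bool) {
	namespace := ""
	for _, network := range networks {
		if network.Namespace == "" {
			// network without an explicit namespace, nothing to compare
			continue
		}
		if namespace == "" {
			namespace = network.Namespace
			continue
		}
		if network.Namespace != namespace {
			// the secondary networks span more than one namespace,
			// so the caller treats the pod as a regular pod
			return "", false
		}
	}
	return namespace, namespace != ""
}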


qinqon commented May 11, 2020

/lgtm
/approve

@phoracek phoracek left a comment

The issue has to be explained in the commit

pkg/pool-manager/pod_pool.go
// we need to receive the namespace from the network annotation
namespace, ok := selectNamespaceFromNetworks(networks)
if !ok {
log.V(1).Info("this pod has more than 1 namespace in its secondary networks, so will be considered as a regular pod")
How does this work? Any pod or VM can request networks from any number of namespaces.

@@ -323,14 +324,23 @@ func (p *PoolManager) IsKubevirtEnabled() bool {
return p.isKubevirt
}

-func (p *PoolManager) isRelatedToKubevirt(pod *corev1.Pod) bool {
+func (p *PoolManager) isRelatedToKubevirt(pod *corev1.Pod, networks []*multus.NetworkSelectionElement) bool {
Network namespaces are in no way related to KubeVirt. If you want to check whether a Pod is owned by KubeVirt, you should check its labels/annotations/owner. I'm sure one of those will provide what you seek, and reliably.
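
A hedged sketch of that alternative; the owner-reference kind and the label value are assumptions about what KubeVirt sets on virt-launcher pods, not something taken from this PR:

func isOwnedByKubevirt(pod *corev1.Pod) bool {
	for _, owner := range pod.OwnerReferences {
		// virt-controller sets the VirtualMachineInstance as the pod's owner (kind assumed)
		if owner.Kind == "VirtualMachineInstance" {
			return true
		}
	}
	// virt-launcher pods also commonly carry the label kubevirt.io=virt-launcher (assumed)
	return pod.Labels["kubevirt.io"] == "virt-launcher"
}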


qinqon commented May 11, 2020

/approved cancel


qinqon commented May 11, 2020

/approve cancel

@kubevirt-bot
Collaborator

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
To complete the pull request process, please assign phoracek
You can assign the PR to them by writing /assign @phoracek in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

…elated pod instance

When running a VM, a virt-launcher pod is spawned. This pod should be ignored by kubemacpool,
since the MAC was already allocated during the creation of the VM. This check is implemented in isRelatedToKubevirt().

Specifically, it does this by checking that there is a multus network annotation and that the VM self link is valid. The link requires the VM's namespace.
Currently, kubemacpool uses the pod's own namespace in order to construct this link.
This doesn't work, because for some reason the pod isn't received with any namespace assigned to it.
Hence, the pod is treated as a standalone pod, and thus rejected.

In this commit, we switch to taking the namespace from the pod's network annotation, which contains the network's namespace.
This is introduced with selectNamespaceFromNetworks(), which goes over the pod's multus.NetworkSelectionElement annotation (if it exists)
and extracts the namespace from there. We assume that in this case there can only be one namespace across all the multus networks.

Signed-off-by: Ram Lavi <ralavi@redhat.com>
@kubevirt-bot
Collaborator

@RamLavi: The following test failed, say /retest to rerun all failed tests:

Test name                  Commit    Rerun command
pull-kubemacpool-e2e-k8s   d8098ed   /test pull-kubemacpool-e2e-k8s

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.


RamLavi commented May 12, 2020

The proposed solution is not good enough for us. Closing the issue.

@RamLavi RamLavi closed this May 12, 2020

Successfully merging this pull request may close these issues.

vmi start fails to start when both pods and vms label is opted in