
virt-handler shared informers #280

Closed
davidvossel wants to merge 2 commits

Conversation

davidvossel (Member)

migrate virt-handler to use the same informer patterns as virt-controller.

@stu-gott (Member) left a comment

LGTM

@davidvossel (Member Author)

retest this please

@davidvossel (Member Author)

Hoping to avoid code rot here. Anyone feel comfortable giving this PR an ack?

@davidvossel (Member Author)

Actually, let's get the "non-default namespace" patch in before this. I'll have to merge with those changes.

@davidvossel (Member Author)

@rmohr I updated this to work with non-default namespaces. Should be good to go now.

panic(err)
}

lw := kubecli.NewListWatchFromClient(f.restClient, "vms", kubeapi.NamespaceDefault, fields.Everything(), labelSelector)
Member commented:

Just wondering if this shouldn't be api.NamespaceAll?
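
For reference, a minimal sketch of what that change would look like, assuming kubeapi here re-exports the upstream NamespaceAll constant (the empty string, which scopes a ListWatch to every namespace rather than only "default"):

// Hedged sketch, not part of this PR: the same ListWatch, but covering all namespaces.
lw := kubecli.NewListWatchFromClient(f.restClient, "vms", kubeapi.NamespaceAll, fields.Everything(), labelSelector)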

@rmohr (Member) left a comment

Looks good. Will try it out before merging.

@@ -70,7 +70,7 @@ var CrashedReasonTranslationMap = map[libvirt.DomainCrashedReason]api.StateChang
}

// NewListWatchFromClient creates a new ListWatch from the specified client, resource, namespace and field selector.
-func newListWatchFromClient(c virtwrap.Connection, events ...int) *cache.ListWatch {
+func NewListWatchFromClient(c virtwrap.Connection, events ...int) *cache.ListWatch {
Member commented:

Not that we have taken much care about exports so far, but are you deliberately exporting that function?

Member Author replied:

Yep, it's used to create the shared informer somewhere else.
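
A hedged sketch of the kind of call site meant here, not the exact kubevirt code (the virtcache package name, the domainConn variable, and the api.Domain type are assumptions for illustration): the exported constructor supplies the ListerWatcher that a shared index informer is built from in another package.

// Assumed illustration: build a shared informer on top of the exported ListWatch.
lw := virtcache.NewListWatchFromClient(domainConn) // domainConn is a virtwrap.Connection
domainInformer := cache.NewSharedIndexInformer(
	lw,            // lists and watches libvirt domains on this node
	&api.Domain{}, // object type kept in the informer's store
	0,             // no periodic resync
	cache.Indexers{},
)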

@rmohr (Member) left a comment

I am not really sure if this PR is a good thing. We are moving virt-handler and libvirt-related code to the same place where cluster-level informer code, which can be shared across the whole cluster, is constructed. That kind of obfuscates that the domain informer is something node- and virt-handler-specific ...

@davidvossel what do you think?

// Create all the informers used by virt-handler here
informerFactory := kubeinformers.NewKubeInformerFactory(restClient, clientSet)
vmOnHostInformer := informerFactory.VmOnHost(*host)
domainSharedInformer := informerFactory.CustomInformer("domainInformer", func() cache.SharedIndexInformer {
Member commented:

OK, thinking about this, I don't really see a reason to move this to a cluster-wide factory. It is purely virt-handler and libvirt-related code.

Member Author replied:

The domain informer is specific to virt-handler; that's why I created the "CustomInformer" function. It lets the InformerFactory manage shared informers that should not be exposed cluster-wide.
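
A hedged sketch of the idea behind CustomInformer, not the actual kubevirt implementation: the factory caches informers under a string key, so a purely node-local informer such as the libvirt-backed domain informer can be registered and shared through the same factory without becoming part of the cluster-wide informer set.

package kubeinformers // hypothetical package name, mirroring the snippet above

import (
	"sync"

	"k8s.io/client-go/tools/cache"
)

type informerFactory struct {
	lock      sync.Mutex
	informers map[string]cache.SharedIndexInformer
}

// CustomInformer returns the informer registered under key, constructing it with
// newFunc on first use; later callers receive the same shared instance.
func (f *informerFactory) CustomInformer(key string, newFunc func() cache.SharedIndexInformer) cache.SharedIndexInformer {
	f.lock.Lock()
	defer f.lock.Unlock()
	if informer, exists := f.informers[key]; exists {
		return informer
	}
	informer := newFunc()
	f.informers[key] = informer
	return informer
}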

@davidvossel (Member Author)

> I am not really sure if this PR is a good thing. We are moving virt-handler and libvirt-related code to the same place where cluster-level informer code, which can be shared across the whole cluster, is constructed. That kind of obfuscates that the domain informer is something node- and virt-handler-specific ...

The CustomInformer function keeps the domain informer from being something that is shared cluster-wide. This keeps a component like 'virt-controller' from having to pull in any libvirt-related client libraries.
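
A hedged usage sketch, following the snippet quoted above and assuming CustomInformer returns a cache.SharedIndexInformer (stopCh and the handler bodies are placeholders): only virt-handler asks the factory for "domainInformer" and attaches handlers to it, so virt-controller never touches the libvirt-backed code path.

// Assumed illustration of the virt-handler side.
domainSharedInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc:    func(obj interface{}) { /* enqueue the new domain */ },
	UpdateFunc: func(oldObj, newObj interface{}) { /* enqueue the changed domain */ },
	DeleteFunc: func(obj interface{}) { /* enqueue the deleted domain */ },
})
go domainSharedInformer.Run(stopCh)
cache.WaitForCacheSync(stopCh, domainSharedInformer.HasSynced)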

@davidvossel (Member Author)

Closing for now. I don't feel super strongly about this patch; it isn't that valuable a change.

kubevirt-bot pushed a commit to kubevirt-bot/kubevirt that referenced this pull request Nov 6, 2020
…ls. (kubevirt#280)

The target pod looks for a pod with a specific label (specified in the pod affinity) that matches the source pod's label.
In my case the target pod included this label as well, so the target pod found a matching pod, but it is the WRONG pod: it is itself!
The target was running without finding the source pod first.
If we remove this label from the target pod, it will find the source pod and then be scheduled on the same node.
If it does not find the source pod (because the scheduler tried to schedule it before the source pod), it will stay in the 'Pending' state until the source pod is scheduled, and then will run on the same node.
kubevirt/containerized-data-importer#279
mzzgaopeng pushed a commit to mzzgaopeng/kubevirt that referenced this pull request Mar 8, 2021
pkg/ip: Return correct error if container name provided exists, and test cases
kubevirt-bot pushed a commit to kubevirt-bot/kubevirt that referenced this pull request Dec 7, 2021
We shouldn't mess with this setting implicitly from kubevirtci for
several reasons:
1. It requires the user to run the tool with root permissions.
2. It changes the behaviour of the system permanently.

Since this is not a strict requirement to make kubevirtci work, we
can drop this code.

Signed-off-by: Daniel Belenky <dbelenky@redhat.com>