virt-handler shared informers #280
Conversation
LGTM
force-pushed from d105c77 to 2733cef
retest this please
Hoping to avoid code rot here. Anyone feel comfortable giving this PR an ack?
Actually, let's get the "non-default namespace" patch in before this. I'll have to merge with those changes.
Signed-off-by: David Vossel <dvossel@redhat.com>
force-pushed from 2733cef to b930372
@rmohr I updated this to work with non-default namespaces. Should be good to go now.
pkg/informers/virtinformers.go (outdated)

        panic(err)
    }

    lw := kubecli.NewListWatchFromClient(f.restClient, "vms", kubeapi.NamespaceDefault, fields.Everything(), labelSelector)
Just wondering if this shouldn't be api.NamespaceAll?
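For readers outside the thread: the difference the reviewer is pointing at is that a list-watch scoped to `NamespaceDefault` only sees one namespace, while the empty-string `NamespaceAll` convention watches cluster-wide. A minimal sketch of the idea, using a hypothetical `resourcePath` helper (not the real kubecli API) to show how the request path changes:

```go
package main

import "fmt"

// NamespaceAll mirrors the Kubernetes convention: an empty
// namespace string means "watch across all namespaces".
const NamespaceAll = ""

// resourcePath is a hypothetical helper illustrating how a
// list-watch URL differs between a fixed namespace and NamespaceAll.
// The group/version prefix is an assumption for illustration only.
func resourcePath(resource, namespace string) string {
	if namespace == NamespaceAll {
		// cluster-wide, e.g. /apis/kubevirt.io/v1alpha1/vms
		return fmt.Sprintf("/apis/kubevirt.io/v1alpha1/%s", resource)
	}
	// namespaced, e.g. /apis/kubevirt.io/v1alpha1/namespaces/default/vms
	return fmt.Sprintf("/apis/kubevirt.io/v1alpha1/namespaces/%s/%s", namespace, resource)
}

func main() {
	fmt.Println(resourcePath("vms", "default"))
	fmt.Println(resourcePath("vms", NamespaceAll))
}
```

With `NamespaceDefault`, VMs created in any other namespace would simply never show up in the informer's cache, which is why the question matters for multi-namespace support.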
Looks good. Will try it out before merging.
@@ -70,7 +70,7 @@ var CrashedReasonTranslationMap = map[libvirt.DomainCrashedReason]api.StateChang
 }

 // NewListWatchFromClient creates a new ListWatch from the specified client, resource, namespace and field selector.
-func newListWatchFromClient(c virtwrap.Connection, events ...int) *cache.ListWatch {
+func NewListWatchFromClient(c virtwrap.Connection, events ...int) *cache.ListWatch {
Not that we have been careful about exports so far, but are you deliberately exporting that function?
Yep, it's used to create the shared informer somewhere else.
I am not really sure if this PR is a good thing. We are moving virt-handler and libvirt related code into the same place where the cluster-wide informer code is constructed, which can be shared across the whole cluster. That obscures the fact that the domain informer is something node- and virt-handler-specific ...
@davidvossel what do you think?
    // Create all the informers used by virt-handler here
    informerFactory := kubeinformers.NewKubeInformerFactory(restClient, clientSet)
    vmOnHostInformer := informerFactory.VmOnHost(*host)
    domainSharedInformer := informerFactory.CustomInformer("domainInformer", func() cache.SharedIndexInformer {
OK, thinking about this, I don't really see a reason to move this to a cluster-wide factory. It is purely virt-handler and libvirt related code.
The domain informer is specific to virt-handler; that's why I created the CustomInformer function. It lets the InformerFactory manage shared informers that should not be exposed cluster wide, which keeps something like virt-controller from having to pull in any libvirt related client libraries.
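The sharing behavior described here can be sketched with a toy factory. Everything below is a simplified stand-in (the `Informer` interface, `fakeInformer`, and `NewInformerFactory` are hypothetical, not the real kubeinformers API); the point is only that the factory caches one informer per key, so repeated requests under the same key return the same shared instance:

```go
package main

import (
	"fmt"
	"sync"
)

// Informer is a stand-in for cache.SharedIndexInformer in this sketch.
type Informer interface{ Name() string }

type fakeInformer struct{ name string }

func (f *fakeInformer) Name() string { return f.name }

// InformerFactory is a hypothetical simplification of the factory
// discussed above: it caches one shared informer per key.
type InformerFactory struct {
	mu        sync.Mutex
	informers map[string]Informer
}

func NewInformerFactory() *InformerFactory {
	return &InformerFactory{informers: make(map[string]Informer)}
}

// CustomInformer lets a caller register a component-specific informer
// (like virt-handler's domain informer) under the factory's lifecycle
// without exposing it as a cluster-wide default.
func (f *InformerFactory) CustomInformer(key string, newFunc func() Informer) Informer {
	f.mu.Lock()
	defer f.mu.Unlock()
	if inf, ok := f.informers[key]; ok {
		return inf // already created: hand back the shared instance
	}
	inf := newFunc()
	f.informers[key] = inf
	return inf
}

func main() {
	factory := NewInformerFactory()
	a := factory.CustomInformer("domainInformer", func() Informer {
		return &fakeInformer{name: "domainInformer"}
	})
	b := factory.CustomInformer("domainInformer", func() Informer {
		return &fakeInformer{name: "domainInformer"}
	})
	fmt.Println(a == b) // both calls yield the same shared instance
}
```

Because the constructor is passed in as a closure, only the package that registers the informer needs to import libvirt-related code; other consumers of the factory never see it.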
Closing for now. I don't feel super strongly about this patch; it isn't that valuable of a change.
Migrate virt-handler to use the same informer patterns as virt-controller.