This issue describes the virtual kubelet lifecycle and the pod lifecycle. The last section collects a bullet list of the known problems related to remote pod status reconciliation; the list can be updated over time, and its items should be solved in follow-up PRs.
Virtual kubelet lifecycle
At boot time, the virtual kubelet fetches from etcd a CR of kind namespacenattingtable (or creates it if it does not exist). This CR contains the namespace natting table for the given virtual node, i.e., the translation between local namespaces and remote namespaces. Every time a new entry is added to this natting table, a new reflection routine for that namespace is triggered; this routine performs:
the remote reflection of many different resources, among which:
service
endpoints
configmap
secret
the remote pod-watcher for the translated remote namespace
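The natting-table mechanics described above can be sketched as follows. This is a minimal, hypothetical model (the type and method names are assumptions, not the real CRD API): adding a new local-to-remote namespace entry is the event that would start the per-namespace reflection routine and pod-watcher.

```go
package main

import "fmt"

// NattingTable is a hypothetical sketch of the per-virtual-node namespace
// natting table: it maps local namespaces to their remote (translated) names.
// The real object is a CR of kind namespacenattingtable; these fields are
// illustrative assumptions.
type NattingTable struct {
	entries map[string]string // local namespace -> remote namespace
}

// Add inserts a new natting entry and reports whether it was new. In the real
// virtual kubelet, a new entry triggers a reflection routine for the namespace
// (services, endpoints, configmaps, secrets) plus a remote pod-watcher.
func (t *NattingTable) Add(local, remote string) bool {
	if _, ok := t.entries[local]; ok {
		return false
	}
	t.entries[local] = remote
	return true
}

// Translate resolves a local namespace to its remote counterpart.
func (t *NattingTable) Translate(local string) (string, bool) {
	r, ok := t.entries[local]
	return r, ok
}

func main() {
	nt := &NattingTable{entries: map[string]string{}}
	if nt.Add("default", "default-cluster1") {
		// This is where the reflection routine for the namespace would start.
		fmt.Println("new entry: start reflection for default -> default-cluster1")
	}
	r, _ := nt.Translate("default")
	fmt.Println(r) // default-cluster1
}
```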
Resource reflection
Resource reflection means that each local resource is translated (if needed) and reflected remotely, so that a pod in the remote namespace has a complete view of the local namespace resources, as if it were local.
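The translation step can be illustrated with a small sketch. This is not the real reflection code (which works on client-go objects); it only shows the idea that a reflected resource keeps its name and kind while its namespace is rewritten through the natting table.

```go
package main

import "fmt"

// resource is a minimal stand-in for a Kubernetes object's metadata,
// used only to illustrate the translation step.
type resource struct {
	Kind      string
	Name      string
	Namespace string
}

// reflectResource translates a local resource into its remote counterpart by
// rewriting the namespace according to the natting table, so that offloaded
// pods see the same resource names they would see locally.
func reflectResource(local resource, natting map[string]string) (resource, error) {
	remoteNs, ok := natting[local.Namespace]
	if !ok {
		return resource{}, fmt.Errorf("no natting entry for namespace %q", local.Namespace)
	}
	remote := local // copy: kind and name are preserved
	remote.Namespace = remoteNs
	return remote, nil
}

func main() {
	natting := map[string]string{"default": "default-cluster1"}
	cm := resource{Kind: "ConfigMap", Name: "app-config", Namespace: "default"}
	remote, _ := reflectResource(cm, natting)
	fmt.Println(remote.Namespace) // default-cluster1
}
```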
Remote pod-watcher
The remote pod-watcher is a routine that listens for all the events related to a remotely offloaded pod in a given translated namespace; this is needed to reconcile the remote status with the local one, so that the local cluster always knows which state each offloaded pod is in. Some remote status transitions trigger the providerFailed status in the local pod instance: providerFailed means that the local status cannot be correctly updated because of an unrecognized remote status transition. We need to investigate more deeply to understand when and why this status is triggered, and to avoid it as much as possible.
The currently known reasons that trigger this status are:
deletion of an offloaded pod from the remote cluster
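The reconciliation behavior above can be sketched as a small mapping from remote pod events to a local status. The event type, phase names, and fallback logic here are assumptions for illustration, not the real virtual kubelet API: recognized remote phases are mirrored locally, while a remote deletion or an unrecognized transition falls back to providerFailed.

```go
package main

import "fmt"

// providerFailed marks a local pod whose status cannot be correctly updated
// because of an unrecognized remote status transition.
const providerFailed = "ProviderFailed"

// event is a hypothetical, minimal view of what the remote pod-watcher
// receives for an offloaded pod.
type event struct {
	Type  string // "ADDED", "MODIFIED", "DELETED"
	Phase string // remote pod phase, e.g. "Running"
}

// reconcile maps a remote pod event to the status to set on the local pod.
func reconcile(ev event) string {
	switch {
	case ev.Type == "DELETED":
		// Deleting an offloaded pod directly from the remote cluster is a
		// known trigger of providerFailed in the local pod instance.
		return providerFailed
	case ev.Phase == "Pending", ev.Phase == "Running",
		ev.Phase == "Succeeded", ev.Phase == "Failed":
		return ev.Phase // recognized transition: mirror it locally
	default:
		return providerFailed // unrecognized remote status transition
	}
}

func main() {
	fmt.Println(reconcile(event{Type: "MODIFIED", Phase: "Running"})) // Running
	fmt.Println(reconcile(event{Type: "DELETED", Phase: "Running"}))  // ProviderFailed
}
```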
mlavacca changed the title from [Epic] Handle Pod Lifecycle to [Epic] Virtual Kubelet on Jun 3, 2020.
palexster changed the title from [Epic] Virtual Kubelet to [Doc] Virtual Kubelet on Jun 4, 2020.