
[Doc] Virtual Kubelet #3

Closed
palexster opened this issue Jun 3, 2020 · 1 comment
palexster commented Jun 3, 2020

Virtual kubelet

This issue describes the virtual kubelet lifecycle and the pod lifecycle. The last section ends with a bullet list of the known problems related to remote pod status reconciliation; the list can be updated over time and the problems should be addressed in the next PRs.

Virtual kubelet lifecycle

At boot time, the virtual kubelet fetches from etcd a CR of kind namespacenattingtable (creating it if it does not exist) that contains the natting table of the namespaces for the given virtual node, i.e., the translation between local namespaces and remote namespaces (a sketch of how this CR might be modeled is given after the list below). Every time a new entry is added to this natting table, a new reflection routine for that namespace is triggered; this routine implies:

  • the remote reflection of several resource types, including:
    • service
    • endpoints
    • configmap
    • secret
  • the remote pod-watcher for the translated remote namespace
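As a reference, here is a minimal sketch of how the namespacenattingtable CR could be modeled as a Go API type. The field names (ClusterID, NattingTable, DeNattingTable) and their layout are assumptions used for illustration, not the project's actual schema:

```go
// Illustrative sketch only: field names and structure are assumptions,
// not the exact schema of the namespacenattingtable CRD.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// NamespaceNattingTableSpec holds the namespace translations owned by one virtual node.
type NamespaceNattingTableSpec struct {
	// ClusterID identifies the remote cluster backing the virtual node (assumed field).
	ClusterID string `json:"clusterId,omitempty"`
	// NattingTable maps local namespace -> remote (translated) namespace (assumed field).
	NattingTable map[string]string `json:"nattingTable,omitempty"`
	// DeNattingTable maps remote namespace -> local namespace (assumed field).
	DeNattingTable map[string]string `json:"deNattingTable,omitempty"`
}

// NamespaceNattingTable is the CR fetched (or created) by the virtual kubelet at boot time.
type NamespaceNattingTable struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec NamespaceNattingTableSpec `json:"spec,omitempty"`
}
```

Watching this CR for new entries is what triggers a per-namespace reflection routine.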

Resource reflection

Resource reflection implies that each local resource is translated (if needed) and reflected remotely, so that a pod in the remote namespace has a complete view of the local namespace resources, as if it were running locally.
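A minimal sketch of the translation step, taking a ConfigMap as an example, could look like the following. The function name and the exact set of metadata fields dropped are assumptions for illustration, not the project's actual reflection code:

```go
// Sketch of translating a local resource before reflecting it remotely.
package reflection

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// translateConfigMap (hypothetical helper) returns a copy of a local ConfigMap
// ready to be created in the remote cluster: the namespace is natted and
// cluster-specific metadata (UID, resourceVersion, ownerReferences, ...) is dropped.
func translateConfigMap(local *corev1.ConfigMap, remoteNamespace string) *corev1.ConfigMap {
	remote := local.DeepCopy()
	remote.ObjectMeta = metav1.ObjectMeta{
		Name:        local.Name,
		Namespace:   remoteNamespace, // remote namespace taken from the natting table
		Labels:      local.Labels,
		Annotations: local.Annotations,
	}
	return remote
}
```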

Remote pod-watcher

The remote pod-watcher is a routine that listens for all the events related to a remotely offloaded pod in a given translated namespace. It is needed to reconcile the remote status with the local one, so that the local cluster always knows in which state each offloaded pod is (a sketch of this watch loop follows the list below). Some remote status transitions trigger the providerFailed status on the local pod instance: providerFailed means that the local status cannot be correctly updated because of an unrecognized remote status transition. We need to investigate more deeply to understand when and why this status is triggered and to avoid it as much as possible.
The currently known reasons that trigger this status are:

  • deletion of an offloaded pod from the remote cluster
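A rough sketch of the pod-watcher loop, assuming a plain client-go watch on the translated namespace, is shown below. The callback names and the mapping of a deleted remote pod to a providerFailed-like condition are assumptions used to illustrate the reconciliation described above, not the actual implementation:

```go
// Sketch of a remote pod-watcher loop (illustrative, assumed callbacks).
package reflection

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchRemotePods reconciles the local status of offloaded pods with the
// events observed in the remote (translated) namespace.
func watchRemotePods(ctx context.Context, remote kubernetes.Interface, remoteNs string,
	updateLocalStatus func(podName string, status corev1.PodStatus),
	markProviderFailed func(podName string, reason string)) error {

	w, err := remote.CoreV1().Pods(remoteNs).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		switch ev.Type {
		case watch.Added, watch.Modified:
			// Propagate the remote status to the local pod instance.
			updateLocalStatus(pod.Name, pod.Status)
		case watch.Deleted:
			// Deleting the offloaded pod directly on the remote cluster is one of
			// the known triggers of the providerFailed status.
			markProviderFailed(pod.Name, "remote pod deleted out of band")
		default:
			log.Printf("unrecognized event %q for remote pod %s", ev.Type, pod.Name)
		}
	}
	return nil
}
```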
mlavacca changed the title [Epic] Handle Pod Lifecycle → [Epic] Virtual Kubelet on Jun 3, 2020
palexster changed the title [Epic] Virtual Kubelet → [Doc] Virtual Kubelet on Jun 4, 2020
palexster mentioned this issue on Jun 4, 2020
palexster (Member, Author) commented:
Addressed by #4. Closing.
