Pod created from on-disk manifest not in API #14992
Comments
FWIW I think this issue might not be hit if #14938 was fixed. If the APIServer eventually gets informed of the pod creation, we'll keep retrying until that time. The linked bug prevents the failed pod from being retried. (There may still be a pathological case if the APIServer update takes a long time, but I think the mainline case should be fixed.)

@yujuhong if the net plugin fails, will a mirror pod be created?

No, syncPod will return upon error.
Static pods (pods created from an on-disk manifest) are designed to work regardless of the apiserver's availability. If I understand correctly, with this network plugin, the kubelet would not be able to run static pods when the apiserver is unreachable? What information do you need from the pod object? I'd like to make sure that mirror pods actually contain the information you want.

That's correct: with the way the network plugin interface is currently designed, our plugin needs to hit the apiserver before any pods can be started. We use the Label, Annotation, Name, and Namespace metadata, and in some cases the Spec.Containers.Ports field. In general, I think that plugins could require any piece of pod state, but if some part of the Spec will be hard to set in this case, we should raise it with SIG-Network to see if anyone else would be inconvenienced.
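To make the dependency concrete, here is a minimal sketch of the situation being described. All type and method names (`podLookup`, `plugin`, `emptyAPI`) are illustrative stand-ins, not the real kubelet interfaces: the plugin receives only a name and namespace and must fetch labels, annotations, and ports from the apiserver, so for a static pod the lookup fails because the apiserver has no record of the pod yet.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical, simplified stand-ins for the real kubelet types.
type Pod struct {
	Name, Namespace string
	Labels          map[string]string
	Annotations     map[string]string
	Ports           []int
}

type podLookup interface {
	GetPod(namespace, name string) (*Pod, error)
}

type plugin struct{ api podLookup }

// SetUpPod illustrates the dependency: the plugin is handed only
// name/namespace and must fetch the rest from the apiserver. For a
// static pod the lookup fails, because the apiserver has not yet
// been told the pod exists.
func (p *plugin) SetUpPod(namespace, name string) error {
	pod, err := p.api.GetPod(namespace, name)
	if err != nil {
		return fmt.Errorf("setup of %s/%s blocked: %w", namespace, name, err)
	}
	fmt.Printf("configuring networking with labels=%v ports=%v\n", pod.Labels, pod.Ports)
	return nil
}

// emptyAPI simulates an apiserver with no record of the static pod.
type emptyAPI struct{}

func (emptyAPI) GetPod(namespace, name string) (*Pod, error) {
	return nil, errors.New("pod not found")
}

func main() {
	p := &plugin{api: emptyAPI{}}
	if err := p.SetUpPod("default", "static-web"); err != nil {
		fmt.Println("error:", err)
	}
}
```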
In this case the user/admin has installed a network plugin that needs to […]
The interesting part is how can we close the loop when the apiserver […]
We could make the pod spec available to the plugin from Kubelet directly […]
Would it be unreasonable to create a mirror pod before instantiating the […]

On Tue, Oct 6, 2015 at 9:54 AM, Paul Tiplady notifications@github.com wrote: […]
Yes, I was going to suggest creating the mirror pod first as the solution. We'll ignore the creation error (if there is any) and proceed with the rest of syncPod. This, together with #14938, should fix the problem.
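A rough sketch of that proposed ordering, under stated assumptions: `syncStaticPod`, `mirrorClient`, and `downAPI` are hypothetical simplifications of the kubelet's sync loop, not its actual API. The idea is simply to attempt mirror pod creation before network setup and ignore failure, so static pods still start when the apiserver is unreachable.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical, simplified types; the real kubelet's mirror-pod
// client and sync loop are more involved.
type Pod struct{ Name, Namespace string }

type mirrorClient interface {
	CreateMirrorPod(p *Pod) error
}

// syncStaticPod sketches the proposed ordering: create the mirror
// pod first, ignore any creation error, and proceed with the rest
// of pod sync (including network plugin setup).
func syncStaticPod(p *Pod, mc mirrorClient, setUpPod func(*Pod) error) error {
	if err := mc.CreateMirrorPod(p); err != nil {
		// Best effort: the sync loop will retry once the
		// apiserver is reachable again.
		fmt.Printf("ignoring mirror pod creation error: %v\n", err)
	}
	return setUpPod(p)
}

// downAPI simulates an unreachable apiserver.
type downAPI struct{}

func (downAPI) CreateMirrorPod(*Pod) error { return errors.New("apiserver unreachable") }

func main() {
	err := syncStaticPod(&Pod{Namespace: "kube-system", Name: "static-web"}, downAPI{},
		func(p *Pod) error {
			fmt.Printf("network setup for %s/%s\n", p.Namespace, p.Name)
			return nil
		})
	fmt.Println("sync error:", err)
}
```

The key design point is that the mirror pod error is logged and swallowed rather than aborting the sync, which preserves the static-pod guarantee of running without the apiserver.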
I'm not suggesting this for the near term, there are probably easier ways to fix this now.
This is going to cause confusion. Take flannel as an example: it has a daemon half, which allocates subnets, and a plugin half, which does the plugin work. The daemon half runs off and makes decisions out of band, which causes problems.
I had a related pr: #13877
At this point, we are isolated from what different "flannel servers in privileged pods" need. If that pod needs to read file manifests and serve them up, it can. If the network plugin needs arbitrarily more information than it's given, it would request it from its server, not the kubelet or the apiserver. I like this because it really streamlines network plugins. I could start the kubelet with --plugin=calico, and it would just pull and run the server with the right settings (like my PR does with flannel).
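A minimal sketch of that pattern, assuming hypothetical names throughout (`pluginServer`, `thinPlugin`, `fakeServer` are illustrative, not real kubelet or flannel types): the plugin half stays thin and asks its own co-located server for whatever extra state it needs, instead of touching the kubelet or the apiserver.

```go
package main

import "fmt"

// pluginServer is an illustrative interface for the privileged-pod
// server half; the plugin delegates all state lookups to it.
type pluginServer interface {
	Lookup(key string) (string, error)
}

type thinPlugin struct{ server pluginServer }

// SetUpPod asks the plugin's own server (not the apiserver) for the
// state it needs, so pod setup no longer depends on apiserver
// availability.
func (tp *thinPlugin) SetUpPod(namespace, name string) error {
	subnet, err := tp.server.Lookup(namespace + "/" + name)
	if err != nil {
		return err
	}
	fmt.Printf("wiring %s/%s into subnet %s\n", namespace, name, subnet)
	return nil
}

// fakeServer stands in for the co-located server half.
type fakeServer struct{}

func (fakeServer) Lookup(string) (string, error) { return "10.1.2.0/24", nil }

func main() {
	p := &thinPlugin{server: fakeServer{}}
	_ = p.SetUpPod("default", "web")
}
```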
Maybe I am missing your point - if the network plugin needs to know some […]

On Tue, Oct 6, 2015 at 10:28 AM, Prashanth B notifications@github.com wrote: […]
Fix kubernetes#14992: "When deploying a pod using an on-disk kubelet manifest (a la /etc/kubernetes/manifests), it appears that the network plugin setUpPod is notified of the new pod before the apiserver."
When deploying a pod using an on-disk kubelet manifest (a la /etc/kubernetes/manifests), it appears that the network plugin setUpPod is notified of the new pod before the apiserver. The network plugin API passes limited information about the pod, with the expectation that the network plugin will look up the pod object in the API if needed. However, in the case above, the apiserver has not been informed of the existence of this pod at the time the network plugin setUpPod is called, and as such the network plugin is unable to get the information it needs from the apiserver.