PodCIDR not set on the master after moving to a different EC2 instance #5437
The fact that the master's API DNS entry wasn't updated in Route53 seems to be a dup of #5289, but there is still the other issue (which may or may not be related) of PodCIDR not being set on the new master.
So I figured it out. There were two issues: the master's API DNS entry wasn't updated in Route53, and PodCIDR wasn't set on the new master.
Since the first problem is already covered by #5289, I'm making this issue only about problem 2.
To fix this I had to set the PodCIDR on the node manually.
H/T kubernetes/kubernetes#32900 for putting me on the right track. Now the question is: why did kops not set this properly on the new master node?
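For anyone hitting the same thing: the workaround that thread points toward is patching spec.podCIDR onto the node object by hand. A minimal sketch, where the node name and CIDR are placeholders you would substitute for your own cluster:

```shell
# Placeholders: substitute your node name and the pod CIDR kops
# would normally have allocated to this node.
NODE="ip-172-x-y-z.ec2.internal"
POD_CIDR="100.96.0.0/24"

# JSON merge patch that sets spec.podCIDR on the node object.
PATCH="{\"spec\":{\"podCIDR\":\"${POD_CIDR}\"}}"
echo "$PATCH"

# Apply it with cluster credentials (commented out here so the
# snippet is safe to copy):
# kubectl patch node "$NODE" -p "$PATCH"
```

After patching, kubelet may need a restart before kubenet picks up the CIDR.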
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Had exactly the same issue. Did anyone find the root cause?
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@antoninbeaufort: You can't reopen an issue/PR unless you authored it or you are a collaborator.
What kops version are you running? Version 1.10.0-alpha.1
What cloud provider are you using? AWS
In the above, the IP 1.2.3.4 is the public IP of the old EC2 instance where the old master was running, which had just been terminated by kops. I had to go to Route53 and update the A record in the zone for k8s.example.com to have it point to the new public IP of the EC2 instance where the new master was. Shortly after updating the A record, I could finally see:
Validation should work, kops should've updated the A record in Route53.
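The manual Route53 fix described above can also be scripted with the aws CLI. A sketch, assuming kops' usual api.<cluster-name> record name; the hosted zone ID, record name, TTL, and IP are all placeholders:

```shell
# Placeholders throughout -- replace with your hosted zone ID,
# the cluster's API record name, and the new master's public IP.
ZONE_ID="Z1EXAMPLEZONE"
RECORD="api.k8s.example.com."
NEW_IP="5.6.7.8"

# UPSERT change batch: creates or overwrites the A record.
cat > change-batch.json <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "${RECORD}",
      "Type": "A",
      "TTL": 60,
      "ResourceRecords": [{"Value": "${NEW_IP}"}]
    }
  }]
}
EOF

# Apply the change (needs Route53 credentials; commented out here):
# aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" \
#   --change-batch file://change-batch.json
```

A low TTL on this record helps the fix propagate quickly while debugging.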
The master still didn't come up successfully. I posted logs for kubelet, api-server, and controller-manager here: https://gist.github.com/tsuna/594fef65be39ecd7e0ffe05bf8113998
Of interest is:

Unable to update cni config: No networks found in /etc/cni/net.d/

(the directory is indeed empty), which I think led to a bunch of:

Jul 12 22:53:38 ip-172-x-y-z kubelet[1635]: E0712 22:53:38.225953 1635 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR

and other connection errors trying to get to the api-server.
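Given that kubelet complaint, a quick way to confirm the node object is actually missing its CIDR is to print each node's spec.podCIDR. A sketch (the command only runs if kubectl is on PATH; otherwise it is just printed):

```shell
# A node with no allocated CIDR shows an empty/<none> CIDR column,
# which matches the "lack of PodCIDR" kubelet error above.
CMD="kubectl get nodes -o custom-columns=NAME:.metadata.name,CIDR:.spec.podCIDR"

# Run it against the cluster if kubectl is available; otherwise
# just show the command so the snippet stays copy-safe.
command -v kubectl >/dev/null && $CMD || echo "$CMD"
```

If the CIDR column is empty, either patch it manually or check that kube-controller-manager is running with CIDR allocation enabled.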