Block changing nonMasqueradeCIDR #1738
For some additional context here, I discovered that Calico is the problem when changing this CIDR in our clusters. It should be possible to change it and do a rolling update so that all pods come up cleanly on the new network, but for some reason it doesn't work, even if you re-run Calico's …
Yeah, that's expected. You can still delete the old one, but it's an extra step.
A rolling update of all Pods in the cluster will fix this, as long as it's done after adding the new IP pool and deleting the old one. It seems reasonable to block changing this on a live cluster: it requires re-configuring a number of components and restarting lots of pods, so it's a pretty disruptive operation.
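The Calico side of that swap can be sketched as an IPPool change. The pool name and CIDR below are hypothetical placeholders, not values from this issue:

```yaml
# Hypothetical new IPPool covering the new nonMasqueradeCIDR.
# Applied with: calicoctl apply -f new-pool.yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: new-pool
spec:
  cidr: 10.244.0.0/16   # placeholder for the new pod CIDR
  natOutgoing: true
```

Once the new pool exists, the old one can be removed with `calicoctl delete ippool old-pool` (again, a placeholder name), and then the rolling update of all Pods can proceed.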
Yeah, I got it to work doing as you said, but it was definitely not a simple/clean process :)
/lifecycle frozen
@blakebarnett I know this is very old, so it's a long shot, but do you perhaps have the steps you went through to make the CIDR change? I'm in the same spot and was wondering how to make it possible.
My memory of it is pretty fuzzy, but I'm pretty sure that after making the change and removing the old CIDR from the Calico configuration, we just did a forced update of all the nodes and things came back online.
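A forced node update like that can be sketched with the kops CLI; the cluster name is a placeholder, and this assumes `KOPS_STATE_STORE` is already set:

```shell
# Force a rolling replacement of every node, even when kops detects
# no spec changes, so they rejoin with the new nonMasqueradeCIDR.
# Disruptive: every node in the cluster is cycled.
kops rolling-update cluster my-cluster.example.com --force --yes
```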
@blakebarnett Thank you! Will give it a go and hope for the best :D. Thanks again
As a data point, I had to do this because the default …
At this point you're going to have a cluster that's going to start acting wonky. Press forward.
After all of that, your nodes will join your cluster and everything should start working again. One thing that's interesting to note is that despite services pointing at a stale cluster CIDR, they will still work because …
It does not end well, because service IPs end up out-of-range and cannot be changed.
We should either come up with a way to rejig the service IPs, or just prohibit this change in validation entirely.
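The out-of-range condition described above can be sketched with Python's `ipaddress` module. The CIDRs and ClusterIPs below are hypothetical examples, not values from this issue:

```python
# Sketch: find Service ClusterIPs that fall outside a new service CIDR,
# i.e. the stale allocations that would make a CIDR change "not end well".
import ipaddress

def out_of_range_ips(cluster_ips, service_cidr):
    """Return the ClusterIPs that do not belong to service_cidr."""
    net = ipaddress.ip_network(service_cidr)
    return [ip for ip in cluster_ips if ipaddress.ip_address(ip) not in net]

# Example: one service allocated under an old 100.64.0.0/13-style range,
# checked against a hypothetical new 10.96.0.0/12 service CIDR.
stale = out_of_range_ips(["100.64.0.10", "10.96.0.1"], "10.96.0.0/12")
print(stale)  # ['100.64.0.10']
```

Any IPs this returns belong to Services that would have to be re-created, since a Service's ClusterIP is immutable once assigned.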