Fix upgrade failure when leader is attached to k8s-master #44
Fixes https://bugs.launchpad.net/charm-canal/+bug/1844605
I tested this with two deployments. For the first one, I did a direct deployment of the canonical-kubernetes-canal bundle from edge using a locally built canal, and ensured that the canal leader was attached to a kubernetes-worker unit. I verified that all calico services, and in particular the calico policy controller, were running.
For the second one, I tried to mimic the bug's failure conditions by deploying canonical-kubernetes-canal-484, ensuring that the canal leader was attached to a kubernetes-master unit, and then upgrading canal. I verified that all calico services upgraded correctly and that I could create new pods after the upgrade.