This repository has been archived by the owner on Sep 30, 2020. It is now read-only.

0.12 release: remove legacy networking (less is more) #1451

Merged

Conversation

@davidmccormick (Contributor) commented Sep 26, 2018

The 0.11 release completed the migration from legacy flannel and calico networking to the newer Self Hosting 'canal' or 'flannel' networking daemonsets.

This change cleans up all of the old code that supported the legacy flannel and calico installations and that enabled the migration from legacy to the networking daemonsets, and it makes self-hosting always enabled. The worker nodes no longer need any knowledge of, or access to, the etcd servers, because SelfHosting uses the Kubernetes API for storage.

This simplifies kube-aws, which is always nice! :)

@k8s-ci-robot k8s-ci-robot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Sep 26, 2018
@davidmccormick (Contributor, Author)

I've run a couple of manual tests: first, upgrading my 0.10.x cluster to this release with canal networking, and then an update where I changed the networking to 'flannel'; both worked successfully. This change requires that the legacy 'useCalico' setting be removed from cluster.yaml.
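
For illustration only, a minimal sketch of what that cluster.yaml change might look like; the exact key names and placement should be checked against the generated cluster.yaml, and the networking block simply mirrors the snippet discussed in the review below:

# Remove the legacy key entirely (its value no longer matters):
# useCalico: ...

# Self-hosted networking is configured here instead (mirrors the reviewed snippet below):
networking:
  selfHosting:
    type: canal   # either "canal" or "flannel"
    typha: false  # only needed for 'canal' on 50+ node clusters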

@codecov-io commented Sep 26, 2018

Codecov Report

Merging #1451 into master will decrease coverage by 0.13%.
The diff coverage is 0%.


@@            Coverage Diff             @@
##           master    #1451      +/-   ##
==========================================
- Coverage   38.26%   38.13%   -0.14%     
==========================================
  Files          74       74              
  Lines        4565     4555      -10     
==========================================
- Hits         1747     1737      -10     
  Misses       2576     2576              
  Partials      242      242
Impacted Files Coverage Δ
core/nodepool/config/deployment.go 62.26% <ø> (-2.03%) ⬇️
core/controlplane/config/config.go 63.34% <0%> (-0.29%) ⬇️


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

if c.Kubernetes.Networking.SelfHosting.Typha && c.Kubernetes.Networking.SelfHosting.Type != "canal" {
    return fmt.Errorf("networkingdaemonsets - you can only enable typha when deploying type 'canal'")
}
if (c.Kubernetes.Networking.SelfHosting.Type != "canal") && (c.Kubernetes.Networking.SelfHosting.Type != "flannel") {
Contributor
Bit of a picky comment, but can we keep the if-statement style consistent?

if (c.Kubernetes.Networking.SelfHosting.Type != "canal") && (c.Kubernetes.Networking.SelfHosting.Type != "flannel") { can be replaced by
if c.Kubernetes.Networking.SelfHosting.Type != "canal" && c.Kubernetes.Networking.SelfHosting.Type != "flannel" {, right?

Contributor Author
Sure - that makes sense, have updated!
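
For reference, a minimal sketch (not the actual kube-aws source) of the validation with the consistent if-statement style applied; the struct, method, and second error message below are illustrative stand-ins for the real config types:

package config

import "fmt"

// SelfHosting is an illustrative stand-in for the real kube-aws networking config type.
type SelfHosting struct {
    Type  string // "canal" or "flannel"
    Typha bool
}

// Validate applies the two checks from the diff above, with the redundant parentheses dropped.
func (s SelfHosting) Validate() error {
    if s.Typha && s.Type != "canal" {
        return fmt.Errorf("networkingdaemonsets - you can only enable typha when deploying type 'canal'")
    }
    if s.Type != "canal" && s.Type != "flannel" {
        // error text is illustrative, not the exact message used by kube-aws
        return fmt.Errorf("networkingdaemonsets - networking type must be 'canal' or 'flannel', got '%s'", s.Type)
    }
    return nil
}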

networking:
  selfHosting:
    enabled: true # false will fall back to legacy coreos flannel/etcd2 installation
    type: canal   # either "canal" or "flannel"
    typha: false  # enable for type 'canal' for 50+ node clusters
Contributor
Do we have any reasons not to enable typha by default btw?

Contributor Author
Typha is only needed on larger clusters with 50+ nodes.

@mumoshu (Contributor) commented Sep 26, 2018
Would the purpose of avoiding it when unnecessary be to reduce the number of moving parts?
I just wondered if it would be a good idea to enable it by default, if it doesn't have any bad side-effects, so that users won't suffer from degraded performance after reaching 50+ nodes.

Contributor Author
I'm a little worried about trying to set it automatically, because that would mean counting the nodes in the nodepools and making allowances for possible cluster auto-scaling. With it off by default, a cluster admin with a growing cluster who starts seeing high load on their apiservers will hopefully have had some time to get acquainted with the cluster.yaml file and its settings.

Contributor
Makes sense. Thanks a lot for clarifying!

@mumoshu (Contributor) left a comment
LGTM. I appreciate your continuous efforts ☺️

@mumoshu mumoshu merged commit da1b7ef into kubernetes-retired:master Sep 26, 2018
@mumoshu mumoshu added this to the v0.12.0 milestone Sep 26, 2018
@kylehodgetts kylehodgetts deleted the remove-legacy-networking branch September 27, 2018 13:32
kevtaylor pushed a commit to HotelsDotCom/kube-aws that referenced this pull request Jan 9, 2019