
Public networking should be optional #85

Open
klausenbusk opened this issue Apr 14, 2018 · 12 comments

Comments

@klausenbusk
Contributor

Follow-up to #75 (comment). I do not see any particular reason why we require "private networking"; it should be optional.

cc @andrewsykim

@klausenbusk klausenbusk changed the title Private ip should be optional Private networking should be optional Apr 14, 2018
@cagedmantis
Contributor

@klausenbusk Can you give us a use case for when you would not want to use private networking?

@klausenbusk
Contributor Author

> @klausenbusk Can you give us a use case for when you would not want to use private networking?

The idea is to distribute the master nodes across different datacenters for redundancy (e.g. FRA1, AMS3, LON1), while the worker nodes run in a single datacenter (e.g. FRA1) for latency reasons. This makes it possible for me to recover from a datacenter outage fairly quickly, by just adding worker nodes in another datacenter.

@aybabtme

Makes sense to me.

@lxfontes

Worth mentioning that split brain is more likely to happen in this scenario (masters in many regions):

  • different network paths from workers to each master
  • latency between masters (etcd consensus)

I still think it makes sense to make private networking optional (in fact, keep private networking as the default and make 'use public network' an option).
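To make the "private by default, public as an opt-in" suggestion concrete, here is a minimal Go sketch of how the CCM could pick which interface to advertise for a node. The `Network` struct, the `USE_PUBLIC_NETWORK` env var, and `nodeAddress` are all hypothetical illustrations, not the project's actual API:

```go
package main

import (
	"fmt"
	"os"
)

// Network is a simplified stand-in for a droplet's network interface;
// the real CCM reads these from the DigitalOcean API. Illustrative only.
type Network struct {
	IPAddress string
	Type      string // "public" or "private"
}

// nodeAddress picks which interface the CCM would advertise for a node.
// Private stays the default; the public interface is used only when the
// operator explicitly opts in.
func nodeAddress(networks []Network, usePublic bool) (string, error) {
	want := "private"
	if usePublic {
		want = "public"
	}
	for _, n := range networks {
		if n.Type == want {
			return n.IPAddress, nil
		}
	}
	return "", fmt.Errorf("no %s interface found", want)
}

func main() {
	networks := []Network{
		{IPAddress: "203.0.113.10", Type: "public"},
		{IPAddress: "10.135.0.2", Type: "private"},
	}
	// Hypothetical opt-in flag; unset means the secure default (private).
	usePublic := os.Getenv("USE_PUBLIC_NETWORK") == "true"
	addr, err := nodeAddress(networks, usePublic)
	if err != nil {
		panic(err)
	}
	fmt.Println(addr)
}
```

With the flag unset this prints the private address; setting `USE_PUBLIC_NETWORK=true` switches to the public one, which matches the "default to secure, opt in to public" shape discussed above.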

@klausenbusk
Contributor Author

> Worth mentioning that split brain is more likely to happen in this scenario (masters in many regions).

A split-brain situation isn't possible with etcd: either the master is healthy or it isn't.

> different network paths from workers to each master

I'm not sure how big a concern that is these days.
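The "no split brain" claim rests on etcd's quorum rule: a cluster of n members only accepts writes when a strict majority (⌊n/2⌋ + 1) agrees, so at most one side of a partition can ever make progress. A small sketch of the arithmetic:

```go
package main

import "fmt"

// quorum returns the number of etcd members required for the cluster
// to accept writes: a strict majority of n.
func quorum(n int) int { return n/2 + 1 }

// faultTolerance is how many members can be lost (or partitioned away)
// while the remainder still holds quorum.
func faultTolerance(n int) int { return n - quorum(n) }

func main() {
	for _, n := range []int{1, 3, 5} {
		fmt.Printf("members=%d quorum=%d tolerates=%d failures\n",
			n, quorum(n), faultTolerance(n))
	}
}
```

So three masters in FRA1, AMS3, and LON1 tolerate one datacenter outage: the two remaining members still form a majority, while the isolated member cannot accept writes on its own.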

@peterver
Contributor

peterver commented Aug 23, 2018

@klausenbusk what about having a multi-datacenter federated Kubernetes cluster? https://kubernetes.io/docs/tasks/federation/. Wouldn't the masters need to communicate over the public internet with the federation server (unless you set up some form of SSH tunnel between clusters)?

Then again, it's not like that will be deployed or need to be taken care of in the CCM.

@klausenbusk
Contributor Author

> @klausenbusk what about having a multi-datacenter federated Kubernetes cluster? https://kubernetes.io/docs/tasks/federation/.

The overhead of using Federation is too big for small clusters (IMHO).

> Wouldn't the masters need to communicate over the public internet with the federation server (unless you set up some form of SSH tunnel between clusters)?

The connection is encrypted, so why is this a concern?

@andrewsykim
Contributor

andrewsykim commented Aug 23, 2018

Private networking on droplets is on a VPC by default now (see https://www.digitalocean.com/docs/release-notes/2018/private-networking/). I can't think of a reason why you would not run a droplet on a private network, to at least isolate L4 proxy traffic. Though private networking is not strictly required, I think we should move forward here assuming that it will be enabled. I'll let @lxfontes have the final say.

@klausenbusk
Contributor Author

> I can't think of a reason why you would not run a droplet on a private network to at least isolate L4 proxy traffic

It is way easier to recover from a datacenter outage if the masters are spread out across multiple datacenters, but that is probably a corner case?
BTW: Can't I achieve the same kind of VPC over the public link by using Cloud Firewalls? According to https://blog.digitalocean.com/whats-new-with-the-digitalocean-network/, all traffic travels over DO-controlled links.
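The Cloud Firewalls idea amounts to restricting each public-facing port to the known peer addresses, with everything else denied. A stdlib-only Go sketch of that default-deny rule matching; the `InboundRule` shape, port numbers, and CIDR blocks below are illustrative assumptions, not the actual Cloud Firewalls API:

```go
package main

import (
	"fmt"
	"net"
)

// InboundRule mimics the shape of a Cloud Firewall inbound rule:
// a destination port plus the source CIDRs allowed to reach it.
type InboundRule struct {
	Port    string
	Sources []string // CIDR blocks
}

// allowed reports whether src may reach port under the given rules,
// defaulting to deny when no rule matches.
func allowed(rules []InboundRule, port, src string) bool {
	ip := net.ParseIP(src)
	for _, r := range rules {
		if r.Port != port {
			continue
		}
		for _, cidr := range r.Sources {
			_, block, err := net.ParseCIDR(cidr)
			if err == nil && block.Contains(ip) {
				return true
			}
		}
	}
	return false
}

func main() {
	// Only a (hypothetical) master subnet may reach the API server
	// and etcd peer ports over the public interface.
	rules := []InboundRule{
		{Port: "6443", Sources: []string{"203.0.113.0/29"}},
		{Port: "2380", Sources: []string{"203.0.113.0/29"}},
	}
	fmt.Println(allowed(rules, "6443", "203.0.113.3"))  // master subnet
	fmt.Println(allowed(rules, "6443", "198.51.100.7")) // anyone else
}
```

This is the "default to secure" posture in miniature: traffic is permitted only when an explicit rule covers both the port and the source.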

@andrewsykim
Contributor

Yeah, for public addresses you probably do want firewalls. What I meant was that there are very few downsides to enabling private networking (especially with the new default VPC isolation). Even in your case, where your masters strictly talk over a public address, having private networking enabled would have no consequence, and it would allow future k8s nodes added in that DC to talk over the private network.

@peterver
Contributor

peterver commented Aug 24, 2018

> It is way easier to recover from a datacenter outage if the masters are spread out across multiple datacenters, but that is probably a corner case?

@klausenbusk @andrewsykim Which is exactly what happened last weekend, when FRA1 networking went down for several hours (Private networking issue FRA1).

Our team wasn't affected because we run multi-datacenter, with Cloudflare monitors on top to steer traffic via geo-IP load balancing, but I don't expect other teams to have as complex a stack as we do.

@lxfontes

I'm 💯 for allowing communication over public networks. However, I disagree with making it the default.

Why? It's an advanced setup.

  • cross-region clusters are not the norm (and within the same region, they should use the private network)
  • the private network is isolated by default, the public one is not, and users should firewall it (default to secure)
  • the failure mode changes (as mentioned before)

We're gonna carve out time to work on this, likely mid-September or as part of Hacktoberfest 🤘

@lxfontes lxfontes changed the title Private networking should be optional Public networking should be optional Aug 28, 2018