
Firewall Controller #86

Closed
andrewsykim opened this issue Apr 16, 2018 · 15 comments · Fixed by #332

@andrewsykim
Contributor

DigitalOcean CCM should watch for node events and update firewall rules accordingly.

related:
kubernetes/kops#4999
#68
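
For context, here is a minimal sketch (in Go, using a client-go shared informer) of what watching node events from a controller could look like. The syncFirewall callback is a placeholder, not the CCM's actual implementation, and a real controller would likely reconcile the full node set rather than reacting to single events.

```go
package firewall

import (
	"log"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// WatchNodes wires node add/delete events to a caller-supplied sync
// function. A real controller would reconcile the whole node set on every
// event rather than handling single nodes in isolation.
func WatchNodes(clientset kubernetes.Interface, syncFirewall func(), stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(clientset, 5*time.Minute)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	nodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			node := obj.(*v1.Node)
			log.Printf("node added: %s", node.Name)
			syncFirewall() // adjust firewall rules now that a new droplet joined
		},
		DeleteFunc: func(obj interface{}) {
			if node, ok := obj.(*v1.Node); ok {
				log.Printf("node deleted: %s", node.Name)
			}
			syncFirewall() // drop rules that referenced the removed droplet
		},
	})

	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, nodeInformer.HasSynced)
}
```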

@timoreimann
Collaborator

timoreimann commented Oct 29, 2018

@andrewsykim could you elaborate on what the desired behavior should look like? For instance, do we intend to set up a deny all ingress policy for every node that comes up? Or are there specific events that guide how firewall rules should be provisioned?

Thanks!

@Richard87

My feeling is that it should open all NodePorts automatically, since they are designed to be available from outside the cluster?

@timoreimann
Collaborator

timoreimann commented Oct 30, 2018

@Richard87 AFAIK, NodePorts on GKE are not publicly exposed by default to enable a distinction between providing local access (within your project / environment) and true public access. Not sure though how that notion maps to DigitalOcean.

@andrewsykim
Contributor Author

Yeah, so one behaviour of the firewall controller would be to turn on access to those node ports if a droplet is added to an LB, etc. This was more desirable back when private networking was more like "shared" networking. Today, with private networking being completely isolated per user, perhaps this may not be needed. What do you think?

@timoreimann
Collaborator

Sorry, this one dropped off my radar. Trying to catch up now...

Do all droplets get assigned a publicly routable IP address? If so, wouldn't that make any process running on a Kubernetes cluster and listening on all interfaces (e.g., a pod with a host port or a native process) generally accessible from the outside?

That's the only scenario I can see where you may still want to have a firewall to make sure nothing gets accidentally exposed. I suppose users could still manage firewalls on their own, but an integration in DO's CCM might be more convenient?

WDYT?

@andrewsykim
Contributor Author

Hmm, good point. For public IPs, I guess firewalls are the only option here. Having DO CCM manage this makes sense to me.

@andrewsykim
Contributor Author

Should be easy enough to create a custom controller for this, similar to what we've done for #142

@timoreimann
Collaborator

timoreimann commented Nov 27, 2018

@andrewsykim 👍 a few further design questions come to my mind:

  1. should we enable or disable the firewall by default? My guess is we can't just enable it as that would be a breaking change from the existing behavior. Or would we still be fine with it since CCM hasn't reached 1.x yet?
  2. how do we allow users to choose a specific default firewall setting (especially the inverse of whatever we decide in 1)? Maybe a new CLI parameter to CCM, e.g., --firewall=off? (A rough sketch follows below.)
  3. how would users toggle the firewall settings for individual ports? I suppose they could go through the usual DO firewall API given that CCM doesn't try to reconcile changes again.

Thanks!
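
To make option 2 concrete, here is a minimal, purely illustrative sketch of an off-by-default flag. The flag name --firewall and its values come from the suggestion above and are not necessarily the options the CCM ended up shipping.

```go
package main

import (
	"flag"
	"log"
)

func main() {
	// Hypothetical flag: firewall management stays off unless explicitly
	// enabled, so existing clusters keep their current behavior.
	firewallMode := flag.String("firewall", "off", "firewall controller mode: off or on")
	flag.Parse()

	switch *firewallMode {
	case "off":
		log.Println("firewall controller disabled (default)")
	case "on":
		log.Println("firewall controller enabled")
	default:
		log.Fatalf("unsupported --firewall value: %q", *firewallMode)
	}
}
```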

@andrewsykim
Contributor Author

  1. I think disable by default is a good starting point
  2. Flag on CCM is best I think.
  3. I think CCM should reconcile state changes though? I'm not sure if we want to allow users to toggle individual ports. If anything, it should be a cluster-wide setting or, at the very least, a per-Service setting (via annotations; see the sketch below).

^ not strongly held opinions so happy to discuss alternatives.
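
To illustrate the per-Service idea, a small sketch of reading such an annotation; the annotation key used here is hypothetical and not one the CCM defines.

```go
package firewall

import (
	"strconv"

	v1 "k8s.io/api/core/v1"
)

// ExcludedFromFirewall reports whether a Service opted out of firewall
// management via a (hypothetical) annotation. The key below is illustrative
// only; the CCM does not define it.
func ExcludedFromFirewall(svc *v1.Service) bool {
	const key = "example.com/firewall-exclude" // hypothetical annotation key
	val, ok := svc.Annotations[key]
	if !ok {
		return false
	}
	exclude, err := strconv.ParseBool(val)
	return err == nil && exclude
}
```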

@timoreimann
Collaborator

@andrewsykim my thinking regarding 3. was that certain users may not want to use a LoadBalancer-typed service for one reason or another (minimizing cost, more sophisticated routing mechanisms, etc.) and instead prefer to talk to NodePorts directly. Not sure how realistic that really is, however, so I'm totally okay with taking the reconciliation route and adjusting only when a use case arises. 👌

@timoreimann
Collaborator

Unless someone wants it more badly than me, I'd take a stab at this one next. :)

@andrewsykim
Contributor Author

@timoreimann please go ahead! Just note that we will likely have it disabled by default for quite some time to battle test it and make sure it won't break anyone's production clusters :)

@timoreimann
Collaborator

@andrewsykim makes total sense -- will make sure the default is off. 👍

@timoreimann
Collaborator

There's also the open PR #70 which ensures that existing firewalls will be extended / reduced once a LoadBalancer-typed service is created / destroyed. I believe we need to have this as well eventually.

@timoreimann
Collaborator

It took a bit, but we now have #332 open to manage a dedicated firewall for dynamic NodePort access. LBs are not taken into account yet, for reasons of scoping and re-questioning the actual need. See #68 (comment) for details.
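
For anyone curious, here is a rough sketch of what reconciling such a dedicated NodePort firewall could look like against the DigitalOcean API via the godo client. The firewall ID, name, ports, and droplet IDs below are made up for illustration, and the real logic lives in #332.

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/digitalocean/godo"
)

// reconcileNodePortFirewall rewrites the inbound rules of an existing,
// dedicated firewall so that only the given NodePorts are reachable. The
// firewall ID, port list, and droplet IDs would come from controller state.
func reconcileNodePortFirewall(ctx context.Context, client *godo.Client, firewallID, name string, nodePorts []int, dropletIDs []int) error {
	rules := make([]godo.InboundRule, 0, len(nodePorts))
	for _, p := range nodePorts {
		rules = append(rules, godo.InboundRule{
			Protocol:  "tcp",
			PortRange: fmt.Sprintf("%d", p),
			Sources:   &godo.Sources{Addresses: []string{"0.0.0.0/0", "::/0"}},
		})
	}

	// Update replaces the firewall's rule set wholesale, which keeps the
	// reconcile idempotent: desired state is recomputed from scratch each time.
	_, _, err := client.Firewalls.Update(ctx, firewallID, &godo.FirewallRequest{
		Name:         name,
		InboundRules: rules,
		DropletIDs:   dropletIDs,
	})
	return err
}

func main() {
	client := godo.NewFromToken(os.Getenv("DIGITALOCEAN_ACCESS_TOKEN"))
	// Example invocation with made-up IDs and ports.
	err := reconcileNodePortFirewall(context.Background(), client,
		"example-firewall-id", "k8s-nodeport-access", []int{30080, 30443}, []int{12345})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```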
