This repository has been archived by the owner on Apr 25, 2023. It is now read-only.

Support NodePort for Type MultiClusterServiceDNSRecord #370

Closed
danehans opened this issue Oct 27, 2018 · 11 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

@danehans
Contributor

Currently, services must be of type LoadBalancer to populate the MultiClusterServiceDNSRecord status.dns.loadbalancer field. Exposing a Kubernetes service of type NodePort is common and should be supported by type MultiClusterServiceDNSRecord.
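
For context, the controller today only reads the service's load-balancer status, which a NodePort service never populates. Below is a minimal sketch of that gap using the core v1 types (not the actual federation-v2 controller code):

```go
// Sketch only: illustrates why a NodePort service yields no endpoints when
// only Service.Status.LoadBalancer is consulted, as happens today.
// Not the actual federation-v2 controller code.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// ingressEndpoints returns the externally reachable addresses reported in the
// service's load-balancer status. For type=NodePort this list is always empty,
// so nothing can be copied into MultiClusterServiceDNSRecord status.
func ingressEndpoints(svc *corev1.Service) []string {
	var eps []string
	for _, ing := range svc.Status.LoadBalancer.Ingress {
		if ing.IP != "" {
			eps = append(eps, ing.IP)
		}
		if ing.Hostname != "" {
			eps = append(eps, ing.Hostname)
		}
	}
	return eps
}

func main() {
	nodePortSvc := &corev1.Service{Spec: corev1.ServiceSpec{Type: corev1.ServiceTypeNodePort}}
	fmt.Println(ingressEndpoints(nodePortSvc)) // [] -- no DNS endpoints today
}
```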

@gyliu513
Contributor

@danehans I think one of the concerns with NodePort is security. Are there any solutions for handling the security issues when using NodePort?

@shashidharatd
Contributor

NodePort poses some restrictions on the DNS-based discovery mechanism.

  1. The NodePort allocated by the service controller would not be the same across all member clusters for the service. (A workaround is for the user to specify the NodePort in the spec, but this is too restrictive, has the potential for port clashes, and is better avoided.)
  2. The DNS mechanism we currently use does not carry port information, so we have no mechanism to discover the port.

Supporting MultiClusterServiceDNSRecord for NodePort is therefore not a production-worthy feature. At most, it can serve some demo purposes.
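
To illustrate point 2: resolving the federated DNS name returns only addresses, so a client has no way to learn a non-standard NodePort from the record itself. A small sketch (the hostname is hypothetical):

```go
// Sketch: an A/AAAA lookup returns only IP addresses, not ports, so a client
// cannot discover the allocated NodePort from the DNS record alone.
// The hostname below is hypothetical.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("myservice.myns.myfed.svc.example.com")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	// Only IPs come back; the port the service listens on is unknown.
	fmt.Println(addrs)
}
```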

@danehans
Contributor Author

@gyliu513 I am not following your question. NodePort is used by GKE and many other Kubernetes implementations to externally expose services.

@danehans
Contributor Author

@shashidharatd thanks for your feedback. I should clarify my question. I do not care about the TCP/UDP port used by type NodePort as it relates to the MultiClusterServiceDNSRecord. With type NodePort, I care about the IP address used for populating the status ingress IP of MultiClusterServiceDNSRecord. As we do with type LoadBalancer, we should be able to obtain the node IPs to use for populating status.ingress of MultiClusterServiceDNSRecord. This will help with test/dev (e.g. minikube) and with on-prem Kubernetes deployments where NodePort is more common. MultiClusterServiceDNSRecord could potentially support NodePort by:

  1. Getting the label selector from the service.
  2. Creating a list of IPs for the service by getting all pods with labels matching the selector from step 1 and reading .status.hostIP for each pod.
  3. Using the IPs from step 2 to populate MultiClusterServiceDNSRecord status.ingress.ip.

Thoughts?
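
A minimal sketch of those three steps, assuming a recent client-go and a hypothetical helper name (not the actual controller implementation):

```go
// Package nodeportdns is a hypothetical package name for this sketch.
// It derives node IPs for a NodePort service by listing the pods selected by
// the service and collecting their host IPs, per the proposal above.
package nodeportdns

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
)

// nodeIPsForService is a hypothetical helper, not existing controller code.
func nodeIPsForService(ctx context.Context, client kubernetes.Interface, namespace, name string) ([]string, error) {
	// 1. Get the label selector from the service.
	svc, err := client.CoreV1().Services(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	selector := labels.Set(svc.Spec.Selector).String()

	// 2. List the pods matching the selector and collect each pod's hostIP.
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return nil, err
	}
	seen := map[string]bool{}
	var ips []string
	for _, pod := range pods.Items {
		if ip := pod.Status.HostIP; ip != "" && !seen[ip] {
			seen[ip] = true
			ips = append(ips, ip)
		}
	}

	// 3. The caller would copy these IPs into MultiClusterServiceDNSRecord status.ingress.ip.
	return ips, nil
}
```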

@gyliu513
Contributor

@danehans The reason I say NodePort is not secure is that when you use NodePort, the service can be accessed from any node, as follows:

  1. Create a service of type NodePort.
  2. After the service is created, a normal user gets a host IP address and one NodePort in the NodePort range (30000-32767 by default).
  3. A normal user can then use a tool to probe that node IP on the other ports in the same range as well, which is not safe.
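
To make that concern concrete, a small sketch (hypothetical node IP, default NodePort range 30000-32767): anyone who can reach a node IP can simply probe the whole range.

```go
// Sketch: with NodePort, every node IP exposes the allocated port, so anyone
// who can reach a node can probe the whole NodePort range. The node IP below
// is hypothetical; 30000-32767 is the Kubernetes default NodePort range.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	nodeIP := "192.0.2.10" // hypothetical node address
	for port := 30000; port <= 32767; port++ {
		addr := fmt.Sprintf("%s:%d", nodeIP, port)
		conn, err := net.DialTimeout("tcp", addr, 200*time.Millisecond)
		if err != nil {
			continue // port not open or filtered
		}
		conn.Close()
		fmt.Println("open NodePort:", addr)
	}
}
```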

@danehans
Contributor Author

@gyliu513 understood. My argument is that NodePort is a supported service type for Kubernetes and is common in the on-prem deployments I see. It is also used by minikube, so I believe there are benefits to supporting this service type.

@gyliu513
Contributor

Thanks @danehans, +1 to having this. We can use NodePort at least for some demo purposes, as mentioned by @shashidharatd :)

@marun marun added kind/feature Categorizes issue or PR as related to a new feature. priority/backlog Higher priority than priority/awaiting-more-evidence. labels Apr 10, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 9, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 8, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
