
EKS Private Endpoint Support #22

Closed

pauncejones opened this issue Dec 5, 2018 · 42 comments
@pauncejones (Contributor) commented Dec 5, 2018

Provide customers with private endpoint access to EKS.

@pauncejones pauncejones created this issue from a note in containers-roadmap (We're Working On It) Dec 5, 2018
@pauncejones pauncejones added the EKS label Dec 5, 2018
@uprightvinyl commented Dec 11, 2018

Could you clarify if this is for the EKS API or the Kubernetes API, or both? If it's the Kubernetes API, that's great. Currently, having to reach out over the internet from worker nodes in private subnets to the EKS masters is what's stopping us from using EKS.

@sriv1211 commented Dec 12, 2018

When you create a VPC endpoint, a record resolving to a VPC IP address is generated in the private DNS.
In principle, this should mean that nodes inside the VPC resolve to the VPC endpoint, so all internal traffic stays inside the VPC, whilst external nodes resolve to the public IP address?

If the system were designed this way, the internal and external endpoints wouldn't need to know about each other and could coexist. Isn't this how all other VPC endpoints work?

@micahhausler (Member) commented Dec 17, 2018

This roadmap item is specifically for the Kubernetes cluster API endpoint, not the EKS API.

@uprightvinyl commented Dec 17, 2018

This roadmap item is specifically for the Kubernetes cluster API endpoint, not the EKS API.

Great news, that's what we need. Thanks for the update.

@senyan commented Jan 9, 2019

What is the estimated timeline for this? It seems this would be a major security improvement.

@NickLarsenNZ commented Jan 16, 2019

This would be super helpful, as we have to limit worker exposure to the internet for some clusters.

@kplimack commented Jan 18, 2019

I cannot use EKS at all if the API is exposed to the internet; it's a non-starter. This feature is a must for me.

@sonykus commented Jan 24, 2019

Same as above. To us, this is a complete showstopper.

@pwdebruin1 commented Jan 28, 2019

Doesn't this blog article suggest it already exists? I don't see a way to forward your PrivateLink NLB to the EKS API endpoint, though, or the actual endpoint listed under AWS services as with ECS.

@kzwang commented Feb 11, 2019

It seems this is partially supported? I see there are 2 ENIs created in my EKS VPC. If I modify a worker node manually (changing the hosts file to point the EKS API server DNS to the private IP of the ENI, or changing the kubeconfig file to use the private IP for the server address), it can connect to the master through the private IP address successfully. It's just that the EKS master's DNS currently resolves to public IP addresses.
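For anyone wanting to poke at this (unsupported) workaround, a rough sketch, assuming a configured AWS CLI, a cluster named my-cluster, and that the EKS-managed ENIs carry a description of the form "Amazon EKS my-cluster" (an assumption worth verifying in your own account); the hostname below is a placeholder:

```shell
CLUSTER=my-cluster

# Find the private IPs of the EKS-managed ENIs in the cluster VPC
aws ec2 describe-network-interfaces \
  --filters "Name=description,Values=Amazon EKS ${CLUSTER}" \
  --query 'NetworkInterfaces[].PrivateIpAddress' --output text

# Then point the API server hostname (placeholder below) at one of those IPs:
# echo "10.0.1.23 abc.us-east-1.eks.amazonaws.com" | sudo tee -a /etc/hosts
```

As noted further down the thread, those IPs are not stable across control-plane changes, so this is only a stopgap.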

@micahhausler (Member) commented Feb 11, 2019

@pwdebruin1 that blog is incorrect, we'll get that corrected.

@kzwang the approach you suggest does not transit the internet, but those IPs are subject to change as API servers are upgraded or undergo routine change.

@tdmalone commented Feb 13, 2019

This roadmap item is specifically for the Kubernetes cluster API endpoint, not the EKS API.

Just to clarify (k8s newbie here), docs state that:

Worker nodes also require outbound internet access to the Amazon EKS APIs for cluster introspection and node registration at launch time.

Do those docs actually mean the Kubernetes cluster API? I'm assuming worker nodes would not need to access the EKS API for any standard sort of deployment (just working out whether I want the roadmap item specified by this issue, or separately PrivateLink support for the EKS API as well).

@nrdlngr commented Feb 13, 2019

https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh#L86

By default, the worker node bootstrap script calls the Amazon EKS API to get the API server endpoint and cluster's certificate authority data.

If you launch your node group by following the steps here, you can bypass this call by specifying the API server endpoint and cluster certificate authority data as BootstrapArguments when you launch the node group.

https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh#L20
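To illustrate the idea, a hedged sketch of passing the endpoint and certificate authority data so the bootstrap script skips its call to the EKS API. The flag names come from the linked bootstrap.sh; the cluster name, endpoint, and CA value are placeholders:

```shell
# Runs on the worker node at launch (e.g. via user data).
# Supplying --apiserver-endpoint and --b64-cluster-ca means bootstrap.sh
# does not need to call the EKS API to describe the cluster.
/etc/eks/bootstrap.sh my-cluster \
  --apiserver-endpoint 'https://abc.us-east-1.eks.amazonaws.com' \
  --b64-cluster-ca 'BASE64_ENCODED_CA_DATA'
```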

@tabern tabern moved this from We're Working On It to Coming Soon in containers-roadmap Feb 15, 2019
@tabern tabern moved this from Coming Soon to Just Shipped in containers-roadmap Mar 20, 2019
@tabern (Contributor) commented Mar 20, 2019

This feature is now live and available for all EKS customers!

https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-eks-introduces-kubernetes-api-server-endpoint-access-cont/

@tabern tabern closed this Mar 20, 2019
@tabern tabern changed the title EKS PrivateLink Support EKS Private Endpoint Support Mar 20, 2019
@senyan commented Mar 20, 2019

Perfect, right on time. Thank you, guys.

@spasam commented Mar 20, 2019

@tabern Thanks. Just tried this. Why is the private zone hidden? Ideally, we would have disabled public access, enabled private access, and used Route 53 resolvers to forward to the private zone from on-premises (over Direct Connect). Now we either have to set up bastion hosts in the VPC to forward traffic or keep public access enabled :(

@tabern (Contributor) commented Mar 20, 2019

@spasam have you tried with Direct Connect yet? The private zone is not available in the Route 53 console, but this does not mean you cannot use Direct Connect. You connect kubectl to the control plane using the provisioned endpoint in the same way regardless of the access control setting. The difference is that the connection will not work if you are outside of the VPC and do not have Direct Connect, a transit gateway, etc. enabled for your client to connect to the API server within the VPC. See the info in our docs:
https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
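As a side note, the current access settings can be inspected with the AWS CLI; a minimal sketch, assuming a cluster named my-cluster:

```shell
# Show the API server endpoint plus the public/private access flags
aws eks describe-cluster --name my-cluster \
  --query 'cluster.[endpoint, resourcesVpcConfig.endpointPublicAccess, resourcesVpcConfig.endpointPrivateAccess]' \
  --output text
```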

@vincentheet commented Mar 20, 2019

Great work. Any idea if CloudFormation support is coming to enable this feature via IaC?

@eperdeme commented Mar 20, 2019

Worth noting you also need to bring up the EC2 API on PrivateLink for the CNI functions to work correctly, along with endpoints for container services such as ECR.

@spasam commented Mar 20, 2019

@tabern thanks for your response. We do use Direct Connect. The problem is with DNS resolution when only private access is enabled. Because the hidden private zone is only attached to the VPC, EKS addresses like abc.us-east-1.eks.amazonaws.com can be resolved in the VPC only (which is documented). We can reach the EKS API server using the private IP address but not using the DNS hostname. Ideally, if there were a way to resolve the private IP addresses using Route 53 resolvers etc., we wouldn't have to enable public access.
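The split behaviour described above can be checked with dig (the hostname is a placeholder for a real cluster endpoint):

```shell
# From an instance inside the cluster VPC: the private hosted zone answers,
# returning the private ENI IPs of the control plane
dig +short abc.us-east-1.eks.amazonaws.com

# From on-premises over Direct Connect: the private zone is not visible to
# the corporate resolver, so the same query returns no usable answer when
# public access is disabled
```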

@antonosmond commented Mar 20, 2019

We also have the same problem re: DNS resolution. We'd like to access the private endpoint from a VPN in a different VPC from the EKS cluster. So far I've tried VPC peering (with DNS resolution & hostnames enabled in both VPCs, as well as requester & accepter DNS resolution), and I've also tried using a transit gateway (with DNS support enabled). Interestingly, I can resolve an instance IP in the other VPC using the Amazon-provided instance DNS name, but not the private endpoint DNS.

@johnjeffers commented Mar 20, 2019

Adding another voice to the chorus. Same problem as @antonosmond.
We really want to use private endpoints but can't, because the name isn't resolvable outside of the VPC. Extremely frustrating.

@chrisglencross commented Mar 20, 2019

Agreed with the other comments that the DNS resolution doesn't seem ideal.

A not-so-nice workaround is to deploy a web proxy on an EC2 node in the VPC, then set the HTTPS_PROXY environment variable on your client before running kubectl. The proxy in the VPC performs the DNS resolution, so your client network doesn't need access to Route 53.
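A minimal sketch of that workaround from the client side, assuming a proxy (e.g. Squid on its default port 3128) is already running on an EC2 host in the cluster VPC and is reachable from the client network; the proxy hostname is a placeholder:

```shell
# Tunnel kubectl's HTTPS traffic through the in-VPC proxy.
# TLS passes through via CONNECT, and DNS resolution for the
# cluster endpoint happens on the proxy, inside the VPC.
export HTTPS_PROXY=http://proxy.internal.example.com:3128
kubectl get nodes
```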

@devkid commented Mar 20, 2019

We are using custom DNS servers in our VPCs and therefore can't even use the private endpoint on our EC2 instances.

@Alien2150 commented Mar 20, 2019

Theoretically speaking: would it not work to put an A record for the known IP (which can be retrieved within the cluster) in some private Route 53 zone? Not a super nice solution, I know, but it should work?

@chrisglencross commented Mar 20, 2019

@devkid wrote:

We are using custom DNS servers in our VPCs and therefor can't even use the private endpoint on our EC2 instances.

So are we. The web proxy I used let me configure a custom host resolver plugin, and in the plugin I used a DNS client library to get the IP address from Route 53 instead of using the host's DNS client.

I only got this working with kubectl. I'm not sure whether the nodes themselves can be configured to connect to the API through a web proxy.

@devkid commented Mar 20, 2019

Theoretically speaking: would it not work to put an A record for the known IP (which can be retrieved within the cluster) in some private Route 53 zone? Not a super nice solution, I know, but it should work?

@Alien2150 because there is no guarantee whatsoever regarding the permanence of those IPs. (What if AWS decides to replace a faulty node of the control plane and the new instance gets a new IP?)

@tabern (Contributor) commented Mar 21, 2019

Thanks for the feedback, keep it coming. We're currently exploring options to allow DNS routing with private endpoints and will update here as we have more information.

@devkid commented Mar 21, 2019

@tabern Feedback: in case you provide multiple endpoints (one private, one public), it would be nice to have an option in aws eks update-kubeconfig to select which endpoint to use. It would also be very nice to be able to switch existing clusters with a public endpoint to a private endpoint while keeping the endpoint URL (we already have clusters in different accounts and regions, and this would spare us from telling every user to run aws eks update-kubeconfig again and from re-generating all kubeconfig files in the Jenkins credentials store).

@zimmertr commented Mar 21, 2019

If one were to disable public access what would be the ramifications?

  1. Would kubectl fail to interact with the API server from outside the VPC?
  2. Would services exposed via ELB still be accessible to the public world?

@ffjia commented Mar 22, 2019

@devkid If I understand correctly, the API server endpoint DNS name is the same for both the public and private endpoint. It's just that AWS assigns a private DNS record (pointing to an internal load balancer?) in the cluster VPC if the private endpoint is enabled.

@MarcusNoble commented Mar 22, 2019

If one were to disable public access what would be the ramifications?

  1. Would kubectl fail to interact with the API server from outside the VPC?
  2. Would services exposed via ELB still be accessible to the public world?

Yes to both: kubectl would fail from outside the VPC, and services exposed via ELB would still be publicly accessible.
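For reference, the switch being discussed is flipped with aws eks update-cluster-config; a sketch, assuming a cluster named my-cluster:

```shell
# Disable the public API server endpoint and enable the private one.
# After this, kubectl only works from inside the VPC (or via Direct
# Connect / transit gateway, subject to the DNS caveats in this thread).
aws eks update-cluster-config --name my-cluster \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true

# The command returns an update ID; poll it until the change lands
aws eks describe-update --name my-cluster --update-id <id-from-previous-command>
```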

@pwdebruin1 commented Mar 22, 2019

Yeah, come on guys. Great feature, but not helpful if you run your EKS clusters in their own VPCs, separate from where you terminate your VPN connections.

@devkid commented Mar 23, 2019

@devkid If I understand correctly, the API server endpoint DNS name is the same for both the public and private endpoint. It's just that AWS assigns a private DNS record (pointing to an internal load balancer?) in the cluster VPC if the private endpoint is enabled.

I'm aware of the current state, I made suggestions for changes.

@wilbur4321 commented Mar 24, 2019

We also have the same DNS problem. We have a split-brain DNS server in another VPC that we use to resolve both Route 53 and corporate addresses. If we could see the Route 53 zone in the console, we could associate it with the other VPC to solve the resolution problem, but without that, this is a showstopper :(

@cdenneen commented Mar 24, 2019

@wilbur4321 can you point your split-brain DNS server to forward to the VPC .2 address as a conditional forwarder for the Route 53 domain? So even though you can't see it in Route 53, it becomes resolvable from your corporate network?
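One way to sketch that conditional-forwarder idea with dnsmasq, assuming a 10.0.0.0/16 cluster VPC (so the VPC resolver sits at 10.0.0.2) and a DNS host that can actually reach that address, i.e. one running inside the VPC or with a route to it; the domain and address are assumptions to adapt:

```shell
# Forward only the EKS endpoint domain to the VPC-provided resolver;
# everything else keeps using the server's normal upstreams.
cat <<'EOF' | sudo tee /etc/dnsmasq.d/eks-forward.conf
server=/us-east-1.eks.amazonaws.com/10.0.0.2
EOF
sudo systemctl restart dnsmasq
```

Note the caveat raised in this thread: the .2 resolver only answers queries originating inside the VPC, so the forwarder itself has to live there.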

@wilbur4321 commented Mar 26, 2019

@cdenneen our problem is that the split-brain DNS server is in a different VPC (in a different account, no less), and we've got our subnet's DHCP pointing to it for everything there as well, so that our AWS systems/containers can talk back into corporate. Normally this works great, but it means EKS worker hosts won't be able to talk to the cluster because they can't resolve its name.

@tabern (Contributor) commented Mar 26, 2019

Hi all, we're now tracking the feature request for DNS resolution to the private hosted zone to allow access to the private Kubernetes API server endpoint as a new item on the roadmap.

Please feel free to leave any feedback here: #221

@mycroftmih commented Apr 11, 2019

I've tried using this feature and it broke the EKS cluster. The state shows "Endpoint-access-update | FAILED" with "Errors (0)".
After this, the API endpoint began timing out, and updating the cluster to a newer version did not fix it.
I also could not reproduce the issue on a different EKS cluster in the same region, but it still looks like an issue, with no details of what happened.

@micahhausler (Member) commented Apr 11, 2019

@mycroftmih is your cluster still there? Can you open a support ticket so we can investigate?

@mycroftmih commented Apr 12, 2019

I still have the cluster; it's in an active state but inaccessible. It's a testing account with Basic support, so I don't think I will be able to open a support ticket from there. There was an automatic case opened by AWS, but I didn't see it in time, so it's closed (5920666461).

@mgonzalezarkho commented Apr 22, 2019

Hello @tabern, I created an EC2 instance with Rancher server installed on it inside VPC A. In VPC B, I deployed an EKS cluster through the Rancher UI. By default, it creates the API server endpoint as public, so I went to the EKS console and changed the API server endpoint to private. After that, I created a transit gateway so I could route traffic between the two VPCs; however, I still can't see my EKS cluster in the Rancher console. The Rancher server console shows an error that it can't resolve the connection between my server and my EKS cluster. Any suggestions? Could this feature help me with this?
