DNS resolution for EKS Private Endpoints #221
@tabern Thanks for creating a separate issue for this. We want to see if there is a temporary way to tackle this issue, assuming we definitely want to disable public access and enable private access. At this point, even if we have to set up some extra hacks, would something like the below work? Do you see any issues?
I wanted to add that it might be beneficial for the EKS cluster API endpoints to appear as a target for Route53 Alias records. We are able to resolve a Route53 private hosted zone over VPN, but using a CNAME created in this private zone doesn't help us resolve the shadow
@nurus did you try creating ALIAS records for the EKS endpoint on the Route53 private hosted zone?
@ikester, I tried to create it manually through the console but was unable to; this is probably because there is no
@nurus Did it work? It would be great if you could give more information on how you made it work.
@rajarajanpsj, unfortunately not. AWS does not provide the hosted zone of the EKS cluster API endpoint, which is a required parameter to create an ALIAS record.
Hello. Overall the change is very nice.
What's the concern with just making the DNS entry public? I think that would solve all these issues.
I agree with @joshkurz. It worked for RDS: if your RDS instance is privately exposed, your RDS endpoint DNS name is publicly resolvable to a private IP within your VPC.
One way to work around this issue is an HTTPS/CONNECT proxy (e.g. tinyproxy) running inside the VPC, which allows connecting to the Kubernetes private endpoint without asking the VPC DNS from the client side. But that's not a nice solution. It would be really good if the authoritative DNS servers for sk1.eu-west-1.eks.amazonaws.com also knew about the private endpoint records, and not only the VPC-internal DNS.
We managed to solve this by using Route53 Resolver. Basically, you want an Inbound Endpoint in your EKS cluster VPC, an Outbound Endpoint in your peered VPC, and a rule associated with your peered VPC that forwards your cluster domain name requests to the Inbound Endpoint IP addresses. Don't forget to allow UDP 53 on your cluster security group for your peered VPC's Outbound Endpoint IP addresses, and to double-check your existing Network ACL rules.
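For reference, the setup above can be sketched with the AWS CLI. All IDs, IP addresses, and the domain name below are placeholders, not values from this thread; treat this as a sketch of the steps, not a copy-paste recipe:

```shell
# 1. Inbound Resolver endpoint in the EKS cluster VPC (placeholder subnet/SG IDs)
aws route53resolver create-resolver-endpoint \
  --creator-request-id eks-dns-inbound-1 \
  --name eks-inbound \
  --direction INBOUND \
  --security-group-ids sg-0123456789abcdef0 \
  --ip-addresses SubnetId=subnet-aaaa1111 SubnetId=subnet-bbbb2222

# 2. Outbound Resolver endpoint in the peered VPC
aws route53resolver create-resolver-endpoint \
  --creator-request-id eks-dns-outbound-1 \
  --name eks-outbound \
  --direction OUTBOUND \
  --security-group-ids sg-0fedcba9876543210 \
  --ip-addresses SubnetId=subnet-cccc3333 SubnetId=subnet-dddd4444

# 3. Rule forwarding the cluster endpoint domain to the inbound endpoint IPs
#    (the target IPs are the inbound endpoint's addresses from step 1)
aws route53resolver create-resolver-rule \
  --creator-request-id eks-dns-rule-1 \
  --name eks-private-endpoint \
  --rule-type FORWARD \
  --domain-name abcd1234.sk1.eu-west-1.eks.amazonaws.com \
  --resolver-endpoint-id rslvr-out-exampleid \
  --target-ips Ip=10.0.1.10,Port=53 Ip=10.0.2.10,Port=53

# 4. Associate the rule with the peered VPC
aws route53resolver associate-resolver-rule \
  --resolver-rule-id rslvr-rr-exampleid \
  --vpc-id vpc-peered1234

# 5. Allow UDP 53 on the cluster-side security group from the
#    outbound endpoint's IP addresses
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol udp --port 53 --cidr 10.1.1.10/32
```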
@cansong genius! Would be cool to get more native support, but this is the best workaround I've seen so far!
@cansong That's a good solution if you can have peered VPCs like that. We've been documenting that approach and will be releasing it soon to help customers for whom the multi-VPC solution works.
For all the others who cannot use the solution mentioned by @cansong, we really need the zones to be public (or an option to make the zone public or private), as written in #221 (comment) :)
We're using a transit gateway to route between our VPCs. Is there a workaround using resolvers (or any workaround at all!) that we could try out?
All - wanted to let you know we published a blog that details how to enable resolution for private endpoints when using a peered VPC or Direct Connect: https://aws.amazon.com/blogs/compute/enabling-dns-resolution-for-amazon-eks-cluster-endpoints/ This has been developed and tested by AWS solutions architects and the EKS team. Let us know if it works or doesn't work for you. We'll be keeping this issue open, as we consider this feature delivered only when resolution works automatically for EKS clusters, and the team is continuing to develop an integrated solution for private endpoint DNS resolution.
Unfortunately (and as mentioned), the above solution eventually does not help if people/pods/... still query e.g. 8.8.8.8 (and if you don't intercept this traffic). An example: the coredns pod has this config:
That means: even with a solution as mentioned by @tabern, and even if you are inside the VPC - if you have e.g. 2 EKS clusters (cluster 1 and cluster 2) and you do operations from cluster 1 against the API of cluster 2, you could end up not being able to resolve the API endpoint of cluster 2. This happens if the in-cluster coredns of cluster 1 forwards the request to 8.8.8.8 to resolve the API of cluster 2. Sure, the settings for coredns could be changed to not use (remove) 8.8.8.8 (and also, the resolv.conf has Note: the above applies to EKS Kubernetes 1.11. I have not tested with 1.12 - maybe the default coredns config is different there.
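To illustrate the failure mode described above, a CoreDNS Corefile of that era looks roughly like the following. This is a hedged sketch, not the exact config from the comment (which was lost in extraction); the point is the final catch-all forwarder:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    # A catch-all forwarder to a public resolver such as 8.8.8.8 here is
    # what breaks private-endpoint resolution: the public resolver has no
    # knowledge of the private hosted zone. Forwarding only to the VPC
    # resolver (via /etc/resolv.conf) avoids the problem.
    proxy . /etc/resolv.conf
    cache 30
}
```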
Creating a conditional forwarding rule for each private API endpoint seems very burdensome, so hopefully what @joshkurz suggested is being considered, as that seems like the most hands-off solution to me. In our case there are controls around usage of the appliance (Infoblox) where DNS is managed on-prem, where these forwarding rules would have to be programmatically created, so it is difficult for us to automate that after creating these clusters.
@tabern I just used the blog post to set up DNS resolution for EKS from our on-premise network using a VPN connection, and it works fine. So thanks for the guide. However, as previously mentioned, it would be a lot easier if you were just able to choose whether you want the DNS zones to be publicly or privately available and control the access using a security group.
@tabern the proposed solution is good when it is allowed to modify the DNS resolution. Unfortunately we cannot do this due to security policies. It would be great to have the ability to assign a custom hostname/FQDN for the API endpoint which could be managed simply via Route53. Of course the server certificate should include the custom name(s).
Still having issues resolving the private endpoint on a few of my clusters, while others are working as expected. Interestingly, it's the 1.14 clusters that don't work and the 1.12 ones that do, but this could just be a coincidence.
@dmateos we're still backfilling across the fleet for existing clusters. Have you tried enabling then disabling the public endpoint? This would force the update for those clusters.
@tabern is there any way to tell (platform version?) whether your cluster has been updated to support this?
It should be automatically enabled with eks.7. However, you can enable it for any cluster immediately by toggling the public/private endpoint settings (turn on the public endpoint, then disable it).
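The toggle described above can be done from the AWS CLI (the cluster name is a placeholder); each call starts an asynchronous update, so wait for the cluster to settle before flipping the setting back:

```shell
# Temporarily enable the public endpoint (keeping private enabled)
aws eks update-cluster-config --name my-cluster \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true

# Wait for the update to complete
aws eks wait cluster-active --name my-cluster

# Then disable it again, leaving a private-only cluster with DNS backfilled
aws eks update-cluster-config --name my-cluster \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
```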
Thanks @tabern. Just tested out the same, works beautifully.
Hey @tabern, is there a way to force an existing cluster to use the latest platform version?
No, there is not. You'll need to create a new cluster or wait for the cluster to be updated.
I have a question about accessing an EKS cluster from a different VPC network.
@rishabh1635 Have you made sure the EKS (master) security group allows 443 traffic from that VPC?
@rishabh1635 Also, it's not clear whether all necessary route tables are configured between the peered networks and whether there are some ACL rules in the way. Please double-check all the common potential issues first.
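For the first check, a rule like the following opens 443 on the cluster's control-plane security group to the peered VPC. The group ID and CIDR are placeholders, not values from this thread:

```shell
# Allow HTTPS to the EKS control plane from the peered VPC's CIDR range
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 \
  --cidr 10.20.0.0/16

# Verify the rule is present
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[0].IpPermissions'
```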
This seems to work if only private endpoints are enabled, and not if both public and private are enabled. My use case: in this situation the DNS resolution doesn't seem to work from the peered VPC. However, it works when I disable the public endpoint completely. Any workarounds? Did anyone get this working?
@vivekanandg Is there any way to access EKS via both endpoints, i.e. private and public? I want to deploy EKS pods via a Jenkins server using only the private endpoint, and access the Kubernetes cluster via the public endpoint (with a set of whitelisted IPs).
@vivekanandg the assumption is that if the public endpoint is enabled, you can access the cluster using it. Why do you have a requirement to access the endpoint through a peered VPC when the public one is available?
@tabern You are right, VPC peering isn't necessary. Didn't realise that the NAT gateway public IP (Elastic) could be whitelisted. It's working fine now.
@tabern I'm curious about this scenario: if you have an account with a VPC that's completely isolated except for VPC peering, and the second account has EKS with both private and public endpoints, then the first account would need the private IP for resolution, since it wouldn't have public gateway egress. Right?
I'd like to understand what happens if the subnet zone selected for the kube API endpoint's private IP has an outage. Will the DNS entry dynamically resolve to a private IP in another zone? Is my cluster endpoint resilient to failures within a zone? I can't find any information about that.
Hi, I have a similar use case to @rishabh1635's. However, our applications that require "internal" access to the EKS cluster API do not have static external IPs that we can whitelist (they are dynamic). So is there a way around this to have both external and internal endpoints, while allowing internal DNS resolution across accounts that are connected via transit gateways (not VPC peering)? Thanks
Hi, I have a similar issue to what @rhysxevans and @cdenneen mentioned. I have an EKS cluster in VPC-A (region 1) that has both public and private access enabled. I am accessing the public API from some whitelisted IPs that are on-prem, for example, which is working fine. I then have another VPC (let's call it VPC-B) in another region (region 2). I have a machine in a subnet in VPC-B, and from this machine I want to use the private kube API endpoint. It is not possible for me to use the public API endpoint, since this subnet does not have a NAT gateway with a public Elastic IP that I can whitelist. I also have some machines with a third-party provider, not in AWS. I am using Client VPN to connect these machines to VPC-A. When trying to use the kube API endpoint, it resolves to the public API and that fails. I would like to use the private endpoint here as well. I would really appreciate any help here, please. Thanks, and looking forward to your reply.
Hello,
This isn't possible today. When the public endpoint is enabled, public endpoint DNS returns the public IP. When public is disabled, that's when we return the private IP when hitting the public endpoint DNS.
It confirms what we observed. My question was more: is there a plan to make it possible and/or is there a workaround?
@yann-soubeyrand You can try to follow the steps in this document to use Route 53 Resolver to forward DNS queries from the Cluster A VPC for the Cluster B domain name to the Cluster B VPC DNS Resolver: https://aws.amazon.com/blogs/compute/enabling-dns-resolution-for-amazon-eks-cluster-endpoints/
Thanks @sethatron!
A separate DNS entry for the private IP would definitely make things easier.
Is there any update on this particular issue? We have a business case where the client only wants connections to the API endpoint to be private, through the VPN (not S2S or AWS Client VPN; it's an OpenVPN), while we need to allow certain services to access the API through a public IP. Currently we don't see an out-of-the-box solution for a perfectly reasonable use case.
I have configured OpenVPN, and I am now able to ping private node groups in the VPC, but I am unable to fetch EKS cluster objects using kubectl get svc. Please help with this.
Automatic DNS resolution for EKS private endpoint private hosted zone within VPC. This allows fully direct connect functionality from on-prem computers to Kubernetes cluster API server that is only accessible within a VPC.
Follow-up feature from #22
Edit 12-13-19: This feature is now available