
[EKS] [request]: VPC endpoint support for EKS API #298

Open
tdmalone opened this issue May 20, 2019 · 11 comments
@tdmalone tdmalone commented May 20, 2019

Tell us about your request
VPC endpoint support for EKS, so that worker nodes can register with an EKS-managed cluster without requiring outbound internet access.

Which service(s) is this request for?
EKS

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
Worker nodes based on the EKS AMI run bootstrap.sh to connect themselves to the cluster. As part of this process, aws eks describe-cluster is called, which currently requires outbound internet access.

I'd love to be able to turn off outbound internet access but still easily bootstrap worker nodes without providing additional configuration.

Are you currently working around this issue?

  • Providing outbound internet access to worker nodes; OR
  • Supplying the cluster CA and API endpoint directly to bootstrap.sh.
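For reference, the second workaround can be sketched roughly like this (cluster name, endpoint URL, and CA data below are placeholders; `--apiserver-endpoint` and `--b64-cluster-ca` are the flags the EKS-optimized AMI's bootstrap.sh accepts to skip the describe-cluster call):

```shell
# Fetch these values once, from a machine that DOES have EKS API access:
#   aws eks describe-cluster --name my-cluster \
#     --query 'cluster.{endpoint: endpoint, ca: certificateAuthority.data}'
# Then bake them into the node user data so bootstrap.sh never needs
# to call the EKS API itself:
/etc/eks/bootstrap.sh my-cluster \
  --apiserver-endpoint 'https://EXAMPLE1234567890.gr7.us-east-1.eks.amazonaws.com' \
  --b64-cluster-ca 'LS0tLS1CRUdJTi...'
```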

Additional context

  • Relates somewhat to #22 & #221, but for the AWS EKS API rather than the Kubernetes control plane API
@tdmalone tdmalone added the Proposed label May 20, 2019
@tabern tabern added the EKS label May 21, 2019
@devonkinghorn devonkinghorn commented Jan 23, 2020

Is there any news on this?

@michael-burt michael-burt commented Jan 23, 2020

Any updates on this issue?

@mikestef9 mikestef9 commented Jan 24, 2020

If you use EKS Managed Nodes, the bootstrapping process avoids the aws eks describe-cluster API call, so you can launch workers into a private subnet without outbound internet access, as long as you set up the other required PrivateLink endpoints correctly.

@michael-burt michael-burt commented Jan 24, 2020

Thanks Mike. Unfortunately managed nodes are not an option because they cannot be scaled to 0. We run some machine learning workloads that require scaling up ASGs with expensive VMs (x1.32xlarge) and we need to be able to scale them back to 0 once the workloads have completed.

@mikestef9 mikestef9 commented Jan 24, 2020

Thanks for the feedback. Can you open a separate GH issue with that feature request for Managed Node Groups?

Will keep this issue open as it's something we are researching.

@dsw88 dsw88 commented Jan 28, 2020

@mikestef9 I'm interested in the managed nodes solution. What do you mean by "you can launch workers into a private subnet without outbound internet access as long as you setup the other required PrivateLink endpoints correctly"?

Which PrivateLink endpoints are you referring to? Just the other service endpoints such as SQS and SNS that the applications running on the cluster may happen to use? Or do you mean that there are particular PrivateLink endpoints required to run EKS in private subnets with no internet gateway?

@mikestef9 mikestef9 commented Jan 28, 2020

Hi @dsw88,

In order for the worker node to join the cluster, you will need to configure VPC endpoints for ECR, EC2, and S3.

See this GH repo https://github.com/jpbarto/private-eks-cluster, created by an AWS Solutions Architect, for a reference implementation. Note that only EKS clusters on 1.13 and above have a kubelet version that is compatible with the ECR VPC endpoint.
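For context, the endpoints listed above can be created along these lines (region, VPC, subnet, security-group, and route-table IDs are placeholders; ECR needs both its api and dkr interface endpoints, while S3 is a gateway endpoint attached to a route table rather than an interface endpoint):

```shell
REGION=us-east-1                      # placeholder values throughout
VPC_ID=vpc-0123456789abcdef0

# Interface endpoints: EC2 plus both halves of ECR
for svc in ec2 ecr.api ecr.dkr; do
  aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id "$VPC_ID" \
    --service-name "com.amazonaws.${REGION}.${svc}" \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled
done

# S3 is a Gateway endpoint, routed via the route table instead
aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Gateway \
  --vpc-id "$VPC_ID" \
  --service-name "com.amazonaws.${REGION}.s3" \
  --route-table-ids rtb-0123456789abcdef0
```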

@dsw88 dsw88 commented Feb 3, 2020

@mikestef9 Thanks so much for the info, and thanks for the pointer to the private EKS cluster reference repository!

I have one final question that I'm having a hard time figuring out how to deal with: How can I configure other hosts in this same private VPC to be able to talk to the cluster? Knowing the private DNS name isn't a huge deal, because I can just hard-code it into whatever needs to talk to the cluster. A bigger problem, however, is how a host in the private VPC can authenticate with the cluster.

Currently when I use the AWS API to set up a kubeconfig with EKS, it includes the following snippet in the generated kubeconfig file:

- name: arn:aws:eks:REGION:ACCOUNT_ID:cluster/CLUSTER_NAME
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - REGION
      - eks
      - get-token
      - --cluster-name
      - CLUSTER_NAME
      command: aws
      env: null

As you can see, it calls the EKS API to get a token that authenticates with the cluster. That definitely presents a problem, since my hosts in the private VPC also don't have access to the EKS API. Is there another way I can authenticate to the cluster without EKS API access?
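Worth noting on the token question: both aws eks get-token and aws-iam-authenticator build the bearer token locally as a presigned sts:GetCallerIdentity URL, so the host needs reachability to STS (e.g. via an STS VPC endpoint), not to the EKS API. A sketch of the equivalent exec stanza using aws-iam-authenticator (cluster ARN and name are placeholders):

```yaml
# The token is minted locally from a presigned STS request; no EKS API call.
- name: arn:aws:eks:REGION:ACCOUNT_ID:cluster/CLUSTER_NAME
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - token
      - -i
      - CLUSTER_NAME
```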

@zucler zucler commented Feb 7, 2020

See this GH repo https://github.com/jpbarto/private-eks-cluster, created by an AWS Solutions Architect, for a reference implementation. Note that only EKS clusters on 1.13 and above have a kubelet version that is compatible with the ECR VPC endpoint.

It seems that this repo uses unmanaged nodes, though. I tried deploying it, and it brought up a cluster without any nodes listed in the EKS web console. Is this expected?

@vranystepan vranystepan commented Feb 10, 2020

@mikestef9 Thank you very much for this clue. Now I have a working setup with managed worker groups and no access to the Internet 🎉

I was not sure if it was feasible, as the documentation says:

Amazon EKS managed node groups can be launched in both public and private subnets. The only requirement is for the subnets to have outbound internet access. Amazon EKS automatically associates a public IP to the instances started as part of a managed node group to ensure that these instances can successfully join a cluster.

Well, apparently it is. If someone needs working Terraform recipes, ping me stepan@vrany.dev.

@mikestef9 mikestef9 commented Feb 10, 2020

@vranystepan great to hear you have this working. As part of our fix for #607 we will make sure to get our documentation updated.
