
AWS-specific add-ons #53

Open · errordeveloper opened this issue Jun 7, 2018 · 13 comments

@errordeveloper (Member) commented Jun 7, 2018

In no particular order we should consider the following:

Some of these add-ons require instance roles and pre-existing resources (e.g. a Route 53 zone for external DNS), so we could make sure those are set up correctly and make things easy for users.
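
For illustration, the IAM side of this could be surfaced in the cluster config file, so the right instance policies get attached at node-group creation time. A minimal sketch (the withAddonPolicies field names are illustrative, not a committed API):

```yaml
# Sketch only: the withAddonPolicies field names are illustrative.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster        # hypothetical cluster name
  region: us-west-2
nodeGroups:
  - name: ng-1
    desiredCapacity: 2
    iam:
      withAddonPolicies:
        externalDNS: true   # Route 53 access for external-dns
        autoScaler: true    # ASG access for cluster-autoscaler
```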

errordeveloper added this to the 0.2.0 – add-ons milestone on Jun 7, 2018

@StevenACoffman (Contributor) commented Jun 7, 2018

Should also consider:

(oh gosh, please not kube2iam)
The cluster-autoscaler configured to work with AWS is also an important addition.
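
For reference, pointing cluster-autoscaler at AWS mostly comes down to a couple of container flags plus matching tags on the ASGs. A trimmed sketch of the relevant part of the Deployment (image tag and cluster name are placeholders):

```yaml
# Trimmed sketch: only the AWS-relevant parts of the pod spec are shown.
containers:
  - name: cluster-autoscaler
    image: k8s.gcr.io/cluster-autoscaler:v1.12.3   # placeholder version
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      # Discover node groups by ASG tags instead of listing each one:
      - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/demo-cluster
    env:
      - name: AWS_REGION
        value: us-west-2
```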

@errordeveloper (Member, Author) commented Jun 8, 2018

@StevenACoffman (Contributor) commented Jun 8, 2018

The Heptio authenticator allows IAM roles to be translated into Kubernetes tokens/accounts, but not the other way around. If an app in a pod uses the AWS SDK, it will call the EC2 metadata service and be granted the same IAM role and permissions as the node.

Custom per-pod IAM roles require either injecting credentials via secrets, or intercepting the EC2 metadata service. kiam (or the more problematic kube2iam) are the current solutions, until the community develops something better, perhaps with SPIFFE and SPIRE.

The author of a new competing solution (iam4kube) drafted this comparison spreadsheet. To comment, join the sig-aws Google group.
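
Concretely, kube2iam and kiam share the same pod-level interface - an iam.amazonaws.com/role annotation - with kiam additionally gating which roles a namespace may assume. A minimal sketch (role and namespace names hypothetical):

```yaml
# kiam-only: the namespace must whitelist assumable roles via a regex.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  annotations:
    iam.amazonaws.com/permitted: "s3-reader"
---
apiVersion: v1
kind: Pod
metadata:
  name: s3-consumer
  namespace: demo
  annotations:
    iam.amazonaws.com/role: s3-reader   # role handed out by the intercepted metadata service
spec:
  containers:
    - name: app
      image: amazon/aws-cli             # any image that uses the AWS SDK/CLI
      command: ["aws", "s3", "ls"]
```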

@artemyarulin commented Jan 16, 2019

aws-alb-ingress-controller would be so nice to have - eksctl provides a very nice experience for creating the cluster, but once you want to make anything public with an Ingress, it's all manual.
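
For context, once the controller is running, exposing a Service publicly is roughly this much YAML (v1.x-era API; names are placeholders):

```yaml
# Rough sketch for aws-alb-ingress-controller v1.x; names are placeholders.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing  # provision a public ALB
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: my-app
              servicePort: 80
```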

@leakingtapan commented Feb 26, 2019

Same thing for CSI drivers:

It would be great if users could optionally pick which drivers they want and have them deployed automatically along with cluster creation.
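
For instance, once the EBS CSI driver is deployed, consuming it is just a StorageClass that points at the CSI provisioner (sketch; the class name is arbitrary):

```yaml
# Sketch: assumes aws-ebs-csi-driver is already installed in the cluster.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com          # provisioner name of the EBS CSI driver
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp2
```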

@leakingtapan commented Mar 14, 2019

What's the status of this issue? Since CSI migration will be alpha in 1.14, the need to install the CSI drivers will only become more pressing in 1.15.

@whereisaaron commented Mar 18, 2019

Probably no rush then @leakingtapan, at the current release rate it will be ~12 months before EKS catches up to 1.15 😄

@jacobtomlinson commented Mar 19, 2019

> @StevenACoffman: (oh gosh, please not kube2iam)

I would be keen to hear more of your thoughts about this. We have been using kube2iam for a while and I hadn't come across kiam. What are the pros/cons?

@StevenACoffman (Contributor) commented Mar 19, 2019

@jacobtomlinson We encountered a lot of problems with kube2iam that were difficult to diagnose. The future of the kube2iam project seems uncertain, and there are open issues logged around race conditions, etc. After some research, kiam seems to be a valid alternative. Its developers have written a detailed post on their experience with kube2iam and why they decided to write kiam.

@jacobtomlinson commented Mar 19, 2019

Thanks for the detailed answer. Those references were exactly what I was looking for.

@whereisaaron commented Mar 19, 2019

I've evaluated kiam before, because the architectural approach is attractive, but (at that time) there were some significant downsides. Have these issues been resolved?

  1. The kiam server couldn't run on the same node as the kiam agents, because the agents blocked access to the AWS API. The suggestion is/was to run the kiam server on the master nodes and run no agents there, but that isn't an option for managed control planes like EKS/GKE/AKS. And having a dedicated node pool just for kiam seems like a large overhead.
  2. The kiam architecture required a CA and x509 certificates between the server and agents, but had no way to self-issue or - most importantly - rotate them, creating a significant manual maintenance overhead. I suggested using cert-manager with a private CA to fix this, but I don't know if that or similar was ever implemented?
  3. The required IAM roles and trust relationships were significantly more complex to configure and manage than kube2iam's (see the sketch after this list).
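
To illustrate point 3: because the kiam server assumes roles on behalf of pods, each pod role has to trust the kiam server's role (not the node role), and the server role needs sts:AssumeRole on every pod role. A rough CloudFormation-style sketch (resource names hypothetical):

```yaml
# Hypothetical resource names; sketch of the extra trust wiring kiam needs.
PodRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            AWS: !GetAtt KiamServerRole.Arn   # pod roles trust the kiam server role
          Action: sts:AssumeRole
KiamServerPolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: kiam-assume-pod-roles
    Roles: [!Ref KiamServerRole]
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action: sts:AssumeRole
          Resource: !GetAtt PodRole.Arn       # server may assume each pod role
```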

I've previously read the articles regarding reported problems with kube2iam, but - perhaps at our scale - we haven't seen any issues. Most of the commonly cited issues seem to have been patched, and the project continues to enjoy wide community support.

I definitely still think kiam is attractive, but the additional architectural complexity and management overhead have kept me away, since kube2iam works just great in our experience.

@mcfedr (Contributor) commented Mar 28, 2019

> @artemyarulin: aws-alb-ingress-controller would be so nice to have - eksctl provides a very nice experience for creating the cluster, but once you want to make anything public with an Ingress, it's all manual.

@artemyarulin I had a similar thought, and 30 mins free :) #675

@artemyarulin commented Mar 28, 2019

@mcfedr That is awesome - thank you! 👍
