AWS Feature: Integration with IAM #23580
See also #14226
We here at Databricks are doing something similar to https://github.com/dump247/ec2metaproxy:
As @therc pointed out in this comment, it doesn't work for containers that use net=host, but those containers tend to perform admin functions and tend to be static, so they can use the role associated with the node itself.
@guoshimin did you say it's written in Scala?
@therc It is indeed written in Scala.
Lyft recently released a similar project to ec2metaproxy, called metadataproxy. Announcement here. I haven't tried it myself, but it might be worth investigating.
cc @erictune FYI.
@guoshimin Is your Scala implementation specifically for Kubernetes? If so, I don't suppose that it's open source? Keen to see how other people are tackling this in production.
@gtaylor Shimin's co-worker here. Ours is specific to Kubernetes, as it relies on pod annotations to determine the appropriate role. Sadly it isn't open source; however, @jtblin did write something very similar and did open source it (https://github.com/jtblin/kube2iam) :D Our internal one differs from kube2iam only in automatically running the needed iptables command if it isn't already set, and in requiring about 10-100x as much memory :P Minor edit: our AWS role permissions are slightly more restricted than the examples in kube2iam; for example, our minions don't have arbitrary assume-role permissions. They are only allowed to assume other roles that have established a trust relationship with them (inside the same account; cross-account isn't a problem we've needed to solve yet).
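The annotation-based lookup described above can be sketched roughly like this. This is an illustrative sketch only: the data structures, annotation key, and function names are assumptions, not kube2iam's actual code or API.

```python
# Sketch: how an annotation-based IAM proxy might resolve the role to assume
# for a request, using the request's source IP to find the calling pod.
# POD_CACHE stands in for state the proxy would keep synced from the
# Kubernetes API; the annotation key mirrors kube2iam's documented
# "iam.amazonaws.com/role" convention, everything else is hypothetical.

POD_CACHE = {
    "10.2.3.4": {"annotations": {"iam.amazonaws.com/role": "my-app-role"}},
    "10.2.3.5": {"annotations": {}},  # no annotation: fall back to a default
}

DEFAULT_ROLE = "node-default-role"  # hypothetical fallback role


def role_for_client(client_ip: str) -> str:
    """Return the IAM role to assume for a request coming from client_ip."""
    pod = POD_CACHE.get(client_ip)
    if pod is None:
        # An unknown IP means we can't attribute the request to any pod,
        # so the proxy should refuse rather than hand out credentials.
        raise LookupError(f"no pod known for IP {client_ip}")
    return pod["annotations"].get("iam.amazonaws.com/role", DEFAULT_ROLE)


print(role_for_client("10.2.3.4"))  # my-app-role
print(role_for_client("10.2.3.5"))  # node-default-role
```

This is also why the one-IP-per-container assumption matters: with net=host, every container on a node shares the node's IP, and the lookup above can no longer distinguish callers.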
Author of https://github.com/jtblin/kube2iam here; we've been using it in our environment for quite some time without issue. We also use a slightly more restricted set of permissions than the example in the README; we only give [...]. The code is inspired by the original [...]
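The trust-relationship restriction both commenters describe amounts to each assumable role carrying a trust policy that names only the worker node's role as a principal, instead of the workers having blanket sts:AssumeRole on everything. A sketch of such a trust policy, with a placeholder account ID and role names:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/k8s-worker-node-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

With this in place, even a compromised worker can only assume roles that have explicitly opted in to being assumed by it.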
Question regarding [...]: it seems that while we can use a trust relationship to limit the roles the Kubernetes workers are allowed to assume, it still means that the Kubernetes workers, which run all applications as pods, can potentially expose all roles and are a single point of failure. Instead of running the proxies as a DaemonSet, would it be possible to run them as standalone instances, say [...]
But a proxy would be a single point of failure too. If it goes down, pods cannot refresh their creds via 169.254.169.254.
It depends on whether you want better uptime or better security.
I was thinking of running multiple instances of the proxy behind a load balancer to provide redundancy. But at least this way, the proxies' roles are isolated.
Is this planned for 1.4? Trying to decide whether to wait for this to be implemented or just go ahead and use https://github.com/jtblin/kube2iam.
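For reference, the DaemonSet deployment pattern discussed above might look roughly like the following manifest. This is a hedged sketch using the extensions/v1beta1 API current around this thread; the image name, flag, and port are placeholders, not any particular project's real defaults.

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: iam-proxy
spec:
  template:
    metadata:
      labels:
        app: iam-proxy
    spec:
      # hostNetwork lets the proxy receive pod traffic that iptables
      # redirects away from 169.254.169.254 on each node.
      hostNetwork: true
      containers:
        - name: iam-proxy
          image: example/iam-proxy:latest   # placeholder image
          args:
            - "--iptables=true"             # hypothetical flag: install the DNAT rule
          ports:
            - containerPort: 8181
              hostPort: 8181
```

Running it as a DaemonSet keeps the failure domain per-node (a dead proxy only affects that node's pods), which is the trade-off against the centralized, load-balanced variant proposed above.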
Not planned for 1.4.
Looks like there are several of these:
The Docker IAM proxies all assume the Docker networking model of one IP per container. Since they use the IP to look up the associated containers, they will not work. See #14226 (comment).
Any updates on this? What's the preferred way to grant permissions at a pod/container level on AWS?
Also interested in this. To address earlier comments: when you want this, it's one of those cases where security trumps availability (though couldn't availability be addressed via DaemonSets or similar?).
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
It would be great to be able to configure IAM permissions on a per-pod basis. We would then have a "mock" metadata service on 169.254.169.254, which would inject the correct IAM roles into the pod (or no roles at all).
I believe this can be implemented using the AWS Security Token Service:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
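The "mock metadata service" idea above boils down to: assume the pod's role via STS, then serve the temporary credentials back in the same JSON document format the real instance metadata service uses at 169.254.169.254/latest/meta-data/iam/security-credentials/&lt;role&gt;. A minimal sketch of that formatting step, where the input dict mirrors the Credentials block an STS AssumeRole call returns and all values are fake placeholders:

```python
import json
from datetime import datetime, timezone


def to_metadata_document(sts_credentials: dict) -> str:
    """Render STS temporary credentials as an instance-metadata-style
    security-credentials JSON document."""
    return json.dumps({
        "Code": "Success",
        "LastUpdated": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "Type": "AWS-HMAC",
        "AccessKeyId": sts_credentials["AccessKeyId"],
        "SecretAccessKey": sts_credentials["SecretAccessKey"],
        "Token": sts_credentials["SessionToken"],
        "Expiration": sts_credentials["Expiration"],
    })


# Fake credentials shaped like the "Credentials" block from sts:AssumeRole.
fake = {
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "secret",
    "SessionToken": "token",
    "Expiration": "2016-09-01T00:00:00Z",
}
print(to_metadata_document(fake))
```

Because AWS SDKs already poll this endpoint and refresh credentials before Expiration, a proxy that serves this document per-pod gets credential rotation for free, with no application changes.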