
AWS Feature: Integration with IAM #23580

Closed
justinsb opened this issue Mar 29, 2016 · 22 comments
Labels
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.
  • sig/auth: Categorizes an issue or PR as relevant to SIG Auth.

Comments

@justinsb
Member

It would be great to be able to configure IAM permissions on a per-pod basis. We would then have a "mock" metadata service on 169.254.169.254, which would inject the correct IAM roles into the pod (or no roles at all).

I believe this can be implemented using the AWS Security Token Service:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
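
A minimal sketch of what such a mock metadata endpoint might look like, using the aws-sdk-go STS client. The account ID, role name, and listen address are placeholders; a real proxy would first resolve the calling pod and decide which role (if any) it may use.

```go
// Hypothetical sketch, not the proposed implementation: an HTTP service that
// answers the EC2 metadata credential path with short-lived STS credentials.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	svc := sts.New(session.Must(session.NewSession()))

	http.HandleFunc("/latest/meta-data/iam/security-credentials/", func(w http.ResponseWriter, r *http.Request) {
		role := strings.TrimPrefix(r.URL.Path, "/latest/meta-data/iam/security-credentials/")
		if role == "" {
			// Listing request: return the role name the calling pod may use.
			w.Write([]byte("my-pod-role")) // placeholder
			return
		}
		out, err := svc.AssumeRole(&sts.AssumeRoleInput{
			RoleArn:         aws.String("arn:aws:iam::123456789012:role/" + role), // placeholder account
			RoleSessionName: aws.String("pod-session"),
		})
		if err != nil {
			http.Error(w, err.Error(), http.StatusForbidden)
			return
		}
		// Reply in the same JSON shape the real metadata service uses.
		json.NewEncoder(w).Encode(map[string]string{
			"Code":            "Success",
			"LastUpdated":     time.Now().UTC().Format(time.RFC3339),
			"Type":            "AWS-HMAC",
			"AccessKeyId":     aws.StringValue(out.Credentials.AccessKeyId),
			"SecretAccessKey": aws.StringValue(out.Credentials.SecretAccessKey),
			"Token":           aws.StringValue(out.Credentials.SessionToken),
			"Expiration":      out.Credentials.Expiration.UTC().Format(time.RFC3339),
		})
	})

	// Traffic to 169.254.169.254 would be redirected to this listener.
	log.Fatal(http.ListenAndServe("127.0.0.1:8181", nil))
}
```

Pods would reach such an endpoint because traffic to 169.254.169.254 gets redirected to the proxy on the node, for example with an iptables DNAT rule, as described in the comments below.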

@justinsb added the area/platform/aws and kind/feature labels Mar 29, 2016
@justinsb
Member Author

See also #14226

@guoshimin
Contributor

We at Databricks are doing something similar to https://github.com/dump247/ec2metaproxy:

  • We run a metadata proxy on each node, using a DaemonSet
  • The proxy uses the host network namespace
  • We DNAT calls to 169.254.169.254 in the PREROUTING chain to the proxy
  • The proxy checks the source IP against the pod list to identify the pod
  • Pods that need a role specify it in their annotations
  • The proxy calls sts:AssumeRole to get temporary credentials for the role and forwards them to the requesting container (see the sketch after this comment)
  • An authorization plugin enforces policies on what pods can use what roles

As @therc pointed out in this comment, it doesn't work for containers that use net=host, but those containers tend to perform admin functions and tend to be static, so they can use the role associated with the node itself.
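
A rough sketch of the source-IP-to-pod lookup described above, using client-go. The annotation key and the hard-coded example IP are placeholders, and the list-then-filter approach is only for illustration; a production proxy would watch and cache the pod list rather than listing pods on every request, and would then call sts:AssumeRole for the resolved role.

```go
// Hypothetical sketch of the per-node proxy's pod lookup: requests to
// 169.254.169.254 are DNATed to the proxy, which maps the request's source
// IP back to a pod and reads the requested role from an annotation.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

const roleAnnotation = "example.com/iam-role" // placeholder annotation key

// roleForSourceIP returns the IAM role requested by the pod that owns srcIP.
func roleForSourceIP(ctx context.Context, client kubernetes.Interface, srcIP string) (string, error) {
	pods, err := client.CoreV1().Pods("").List(ctx, metav1.ListOptions{})
	if err != nil {
		return "", err
	}
	for _, pod := range pods.Items {
		if pod.Status.PodIP != srcIP {
			continue
		}
		if role, ok := pod.Annotations[roleAnnotation]; ok {
			return role, nil
		}
		return "", fmt.Errorf("pod %s/%s has no %s annotation", pod.Namespace, pod.Name, roleAnnotation)
	}
	return "", fmt.Errorf("no pod found with IP %s", srcIP)
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(roleForSourceIP(context.Background(), client, "10.2.3.4"))
}
```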

@therc
Member

therc commented Mar 31, 2016

@guoshimin did you say it's written in Scala?

@guoshimin
Contributor

@therc It is indeed written in Scala.

@mgoodness

Lyft recently released metadataproxy, a project similar to ec2metaproxy. Announcement here. I haven't tried it myself, but it might be worth investigating.

@ghost

ghost commented Apr 11, 2016

cc @erictune FYI.
Do we need an area/iam label?

@ghost added the priority/backlog label Apr 11, 2016
@ghost added this to the next-candidate milestone Apr 11, 2016
@erictune added the sig/auth label Apr 18, 2016
@gtaylor
Contributor

gtaylor commented Jul 6, 2016

@guoshimin Is your scala implementation specifically for Kubernetes? If so, I don't suppose that it's open source? Keen to see how other people are tackling this in production.

@thomasdesr

thomasdesr commented Jul 6, 2016

@gtaylor Shimin's co-worker here

Ours is specific to Kubernetes, as it relies on pod annotations to determine the appropriate role. Sadly it isn't open source; however, @jtblin wrote something very similar and did open source it (https://github.com/jtblin/kube2iam) :D

Our internal one differs from kube2iam only in that it automatically runs the needed iptables command if it isn't already set, and in that it requires about 10-100x as much memory :P

Minor edit: our AWS role permissions are slightly more restricted than the examples in kube2iam. For example, our minions don't have arbitrary assume-role permissions; they are only allowed to assume roles that have established a trust relationship with them (within the same account; cross-account isn't a problem we've needed to solve yet).
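
For illustration, a sketch of the two policy halves this describes, with placeholder account IDs and role names, embedded as Go string constants so the example stays in one language: the node role may call sts:AssumeRole only on specific role ARNs, and each of those roles must in turn trust the node role.

```go
// Hypothetical sketch of the policy pair described above; account IDs and
// role names are placeholders.
package main

import (
	"encoding/json"
	"fmt"
)

// Attached to the node (minion) instance role: it may assume only the listed
// application role, rather than having arbitrary assume-role permissions.
const nodeAssumeRolePolicy = `{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::123456789012:role/my-app-role"
  }]
}`

// Trust policy on the application role: only the node instance role may
// assume it, which is the "trust relationship" mentioned above.
const appRoleTrustPolicy = `{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:role/k8s-node-role"},
    "Action": "sts:AssumeRole"
  }]
}`

func main() {
	// Sanity-check that both documents are well-formed JSON.
	for name, doc := range map[string]string{
		"node assume-role policy": nodeAssumeRolePolicy,
		"app role trust policy":   appRoleTrustPolicy,
	} {
		var v interface{}
		if err := json.Unmarshal([]byte(doc), &v); err != nil {
			fmt.Printf("%s: invalid JSON: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: ok\n", name)
	}
}
```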

@jtblin
Contributor

jtblin commented Jul 6, 2016

Author of https://github.com/jtblin/kube2iam here. We've been using it in our environment for quite some time without issue. We also use a slightly more restricted set of permissions than the example in the README: we only allow sts:AssumeRole on a specific role instead of root. I shall update the README.

The code is inspired by the original kube2sky and uses similar APIs.

@evie404
Contributor

evie404 commented Jul 21, 2016

Question regarding sts:AssumeRole and running proxies as a DaemonSet:

It seems that while we can use trust relationships to limit the roles the Kubernetes workers are allowed to assume, the workers, which run all applications as Pods, can still potentially expose all of those roles and are a single point of failure.

Instead of running the proxies as a DaemonSet, would it be possible to run them as standalone instances, say iam-proxy, and route the Pods' 169.254.169.254 calls to them? This would at least prevent containers with privileged access from potentially gaining any of those roles.

@erictune
Member

But a proxy would be a single point of failure too. If it goes down, pods cannot refresh their credentials via 169.254.169.254.

@erictune
Member

It depends on whether you want better uptime or better security.

@evie404
Contributor

evie404 commented Jul 21, 2016

I was thinking of running multiple proxy instances behind a load balancer to provide redundancy. But at least this way, the proxies' roles are isolated.

@yissachar

Is this planned for 1.4? Trying to decide whether to wait for this to be implemented or just go ahead and use https://github.com/jtblin/kube2iam.

@erictune
Member

Not planned for 1.4.


@erictune
Member

erictune commented Sep 21, 2016

@evie404
Contributor

evie404 commented Sep 21, 2016

The Docker IAM proxies all assume the Docker networking model of one IP per container; since they use the IP to look up the associated containers, they will not work. kube2iam, by contrast, looks pods up through the Kubernetes API, so it does not have this shortcoming. It should also support rkt out of the box, since it depends on neither the Docker networking model nor the Docker socket.

see: #14226 (comment)

@Vlaaaaaaad

Any updates on this? What's the preferred way to grant permissions at a pod/container level on AWS?

@garrickpeterson-wf

Also interested in this. To address the earlier comments: when you want this, it's one of those cases where security trumps availability (though couldn't availability be addressed via DaemonSets or similar?).

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Feb 19, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Apr 15, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

openshift-publish-robot pushed a commit to openshift/kubernetes that referenced this issue Aug 20, 2019
Bug 1739085: Fix multi-version CRD and admission webhook …conversion

Origin-commit: 7cb70012ba344ae0e4617182682d26303cd2f9c0