
Adding IAM roles to the cred chain #115

Closed
wants to merge 5 commits
Conversation

@jduv commented Jan 3, 2019

This PR is backwards compatible--that is, it won't break any existing usage of the S3 resource--and it allows those of us with configured IAM roles to avoid storing sensitive AWS key information in a cred store or passing it into the jobs. Yes, a cred store is an excellent place for those kinds of things, but per AWS best practices an IAM role is a better solution than static credentials (for many reasons).

This implementation is simple and straightforward--we've been using this in production since this morning.

All tests pass.

https://cloud.docker.com/u/7factor/repository/docker/7factor/s3-resource

Signed-off-by: jduv <jduv@7factor.io>
@topherbullock (Member) commented Jan 3, 2019

@jduv How do the credentials get propagated from the host EC2 instance and added to the resource container so that the AWS SDK will pick them up?

The SDK docs for NewSession describe a few possible ways: https://github.com/aws/aws-sdk-go/blob/master/aws/session/session.go#L116-L131, but I'm not sure any of the necessary config files or environment config will be propagated to the container.

Maybe I'm missing some magic here whereby AWS's SDK somehow knows which EC2 instance it is in, even when in a container on that instance, but I'm not seeing it.

@cirocosta (Member)

IIRC,

The SDK gathers the credentials from the EC2 metadata server, which should be accessible to any machine running in EC2 (unless blocked by the admin). Is that right?

thx!

@topherbullock (Member)

@cirocosta ohhhhh! I dug a bit into how instance profiles work and this does seem to be the case.

This handy article came up and explained how to block containers from accessing it: https://ops.tips/blog/blocking-docker-containers-from-ec2-metadata/

I doubt there are defaults in Garden which explicitly set up any blocking, so this should "just work". Though I'm now wondering whether it makes sense for us to document how this works in Garden, and explicitly allow admins to block containers from accessing instance metadata.
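For admins who do want that blocking, the approach in the article boils down to a host firewall rule that rejects container traffic to the metadata endpoint. A hedged sketch of the kind of rule involved (the `DOCKER-USER` chain is Docker-specific; a Garden host may need a different chain such as `FORWARD` depending on its network setup):

```shell
# Reject container-originated traffic to the EC2 metadata endpoint.
# Chain name and exact match criteria depend on the host's setup.
iptables --insert DOCKER-USER \
         --destination 169.254.169.254 \
         --protocol tcp \
         --jump REJECT
```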

@topherbullock (Member)

@jduv Any thoughts on an integration test for this?

@jduv (Author) commented Jan 6, 2019

@topherbullock I'll look into it next week. It'll very likely be a pain in the behind :D.

@cirocosta (Member) commented Jan 7, 2019

Hi @jduv,

I just tested the code here, and it seems to be running well!

(you can check the testbed here - https://github.com/cirocosta/concourse-s3-resource-iam-roles-sample)

I wonder if instead of panic()ing there in the session.Must, we could gently present a better message.

For instance, running the code under GCP leads to:


stderr:
Using default credential chain for authentication.
error listing files: NoCredentialProviders: no valid providers in chain. Deprecated.
	For verbose messaging see aws.Config.CredentialsChainVerboseErrors

I'm not entirely sure how other resources handle this, but giving a more explicit reason could help?

Wdyt?

cc @topherbullock

@cirocosta (Member) commented Jan 7, 2019

Hey @vito ,

I noticed that this repo lacks the label (approved-for-ci) to get the PR going through the pipeline for the integration tests that the resource has (https://github.com/concourse/concourse/blob/c862322702eea2fd561b1a04de18ab1c1ef4810b/ci/pipelines/resources/template.jsonnet#L259).

Is that intentional?

thx!

@vito (Member) commented Jan 10, 2019

@cirocosta That label is required so that we eyeball the PR and make sure it doesn't do anything nefarious, as the PR build runs with configured access to a test bucket that we don't want to leak (or have exploited).

I'll add the label now.

@vito (Member) commented Jan 10, 2019

As mentioned in concourse/concourse#3023, though, I'm going to put this on hold until we figure out how we want to support IAM roles in the context of concourse/concourse#2386.

@bhutkovskyysos

Any progress on this?

@nbrys commented Sep 24, 2019

Any progress on this issue?

@jduv (Author) commented Sep 24, 2019

If you want to temporarily use the container that we've built with these fixes, you can use it like this:

resource_types:
  - name: s3
    type: docker-image
    source:
      repository: 7factor/s3-resource

https://cloud.docker.com/u/7factor/repository/docker/7factor/s3-resource
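With that resource_type in place, a resource of type `s3` can then rely on the instance profile alone, i.e. with no access_key_id/secret_access_key in its source. A hedged sketch (the bucket name and regexp here are placeholders, not values from this PR):

```yaml
resources:
  - name: my-artifact
    type: s3   # the custom resource_type defined above
    source:
      bucket: some-bucket            # placeholder
      regexp: builds/app-(.*).tgz    # placeholder
      # no access_key_id / secret_access_key: the default credential
      # chain falls through to the worker's IAM instance profile
```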

@vito (Member) commented Mar 25, 2020

For those following along, I've given an update here: concourse/concourse#3023 (comment)

vito self-assigned this Apr 17, 2020
Rebase from original master.
@vito (Member) commented Jun 4, 2020

Rather than leaving this open forever, I'm going to close it out, since it won't be merged either way.

concourse/concourse#3023 describes the reason we can't merge PRs like this, and concourse/concourse#3023 (comment) proposes a solution but we still need to work out the details. I hope to find time for this next week.

Thanks again for submitting this!

7 participants