This repository has been archived by the owner on Sep 26, 2021. It is now read-only.

EC2 credentials could expire #533

Closed
sthulb opened this issue Feb 12, 2015 · 19 comments · Fixed by #3799

Comments

@sthulb
Contributor

sthulb commented Feb 12, 2015

Under certain situations, credentials could expire for the EC2 driver, i.e. when using roles.

We could perhaps mitigate it by checking credentials and then either asking for more or using env vars.

@nathanleclaire
Contributor

Good call-out - this seems like something that could potentially be mitigated using config (as in docker-machine config Drivers.AmazonEc2.AmazonSecretAccessKey mykey to update the current key you are using). It's good practice to rotate keys regularly, so I'm +1 on thinking about and covering this case well.

@sthulb
Contributor Author

sthulb commented Feb 12, 2015

What do we do in the event of multiple accounts?

Keep track of the account ID the instances are in and then match them against the config so we end up with a config key of: Drivers.EC2.[12 digit account ID].[KeyType]?

@ehazlett
Contributor

Why wouldn't we just throw an error and let the user present new credentials?

If we want to add a grouping construct, we would need additional options at create time as well.


@sthulb
Contributor Author

sthulb commented Feb 12, 2015

If we just throw errors, things like the ls command break. I'd rather just try to detect them again.

@ehazlett
Contributor

Isn't that going to do the same thing if you cannot detect the new ones?

@PavelVanecek

I just ran into this issue. In our configuration, AWS tokens expire after ~30 minutes. I can, however, see the old credentials cached in ~/.docker/machine/machines/mymachinename/config.json. This effectively prohibits us from using the Amazon EC2 driver.

If I delete the credentials from the JSON, I get the error EmptyStaticCreds: static credentials are empty.

Relates to #2739 and #1840

@nathanleclaire
Contributor

I wonder if we could modify the AWS driver to support temporary tokens. I might be mis-remembering, but I think @jeffellin may have done some research in this regard.

@jeffellin
Contributor

So this is a little nuanced, depending on whether you want to break backwards compatibility. The truth is that now that you are using the AWS SDK, you don't need to store credentials at all. The SDK has a predetermined notion of where they should be: env variables, the .aws dir, etc. So my recommendation is to stop reading the credentials from the config file and just use what's in the environment. This should even enable STS tokens to work without issues, since the SDK already knows how to work with those.

The issue with making this change is that users who don't have their Amazon environment configured will need to set it up; otherwise an upgrade will break them.

I personally think it's a really bad idea to be storing the IAM credentials in the config file. For the CloudFormation driver I created, I do not do this; I just pick up whatever the AWS SDK can find.

@fffergal

Coming up against this today too. I like the idea of just using the standard AWS places: env vars, ~/.aws, etc. It would also be cool to pick up the region from there.

Can confirm that updating config.json with new credentials works fine.

@tlvince

tlvince commented Apr 14, 2016

I ran into this with the following error:

AuthFailure: AWS was not able to validate the provided access credentials

It wasn't clear to me how to resolve this until I remembered my AWS tokens had been rotated. I'd expected auth to work after (re)running aws configure, to no avail. After poking around in the machine's config, I was surprised to see my AWS SecretKey; I'd expected these to be read from ~/.aws/credentials. This also raises another consideration for us regarding sharing configs between our team (via #23).

@jeffellin
Contributor

This is pretty easy to fix, but we will break backwards compatibility. I think we should do it sooner rather than later. We should just use AWS credentials; I am willing to do the code change for this.

@nathanleclaire
Contributor

@jeffellin When you say it will break backwards compatibility, what do you mean?

It seems reasonable to me that we could at least add a section in the NewDriver method for amazonec2 which might re-read credentials from ~/.aws/credentials and then overwrite them in the Driver struct. We'd have to ensure some way to make sure they get saved, though, which would be the tricky bit -- only some commands actually save the config after running.

@jeffellin
Contributor

jeffellin commented Apr 14, 2016

@nathanleclaire I'd really like to get the creds out of the Driver struct altogether; it isn't necessary, and I feel it's bad practice, as most people are unaware that we are saving potentially sensitive information in that location. The Go AWS SDK automatically reads from either the AWS env variables or the ~/.aws/credentials location.

@jeffellin
Contributor

jeffellin commented Apr 14, 2016

@nathanleclaire if the user is currently using the aws driver and doesn't have the credentials set either in the AWS_ environment variables or in ~/.aws/credentials, the driver will stop working despite a valid credential being in .docker/machine.

svc := ec2.New(session.New()) is all you need to force the AWS client to look for tokens in all the standard places, including STS temporary tokens.

@nathanleclaire
Contributor

nathanleclaire commented Apr 14, 2016

@nathanleclaire I'd really like to get the creds out of the Driver struct altogether; it isn't necessary, and I feel it's bad practice, as most people are unaware that we are saving potentially sensitive information in that location. The Go AWS SDK automatically reads from either the AWS env variables or the ~/.aws/credentials location.

Yes, I agree, and I have been arguing for a proper "provider" model to help with this for a long time. As you noted, unfortunately it would likely require a sizable backwards incompatible change, breaking pretty much all existing drivers / plugins and requiring a version bump to the RPC API. It's difficult. We really need to tread wisely on that one. I agree the architectural choices made historically were not ideal but we do have quite a few downstreams (e.g. Rancher) to consider. I'm very happy to start discussing some proposals for a new and improved interface and implementation though.

In the interim, if we can provide a way for users to refresh the credentials saved in config.json, we should do that, as I think it would require much less effort and fewer backwards-incompatible changes.

@nathanleclaire if the user is currently using the aws driver and doesn't have the credentials set either in the AWS_ environment variables or in ~/.aws/credentials, the driver will stop working despite a valid credential being in .docker/machine.

I don't understand. Is this an existing issue today, or a hypothetical?

@jeffellin
Contributor

It's an issue that would result from not reading the credentials from the .docker/machine file and instead using the aws environment.

For Amazon I don't think we really need a sizable change to the way drivers store credentials, as we can easily just punt and let the Go SDK handle it. This is what I have been doing with the CloudFormation driver I created.

If we were to make this change, the only thing a user would need to do is make sure that they run aws configure or set the appropriate environment variables. Requiring this step is really the breaking change from the previous release.


@mjiderhamn

mjiderhamn commented Jul 19, 2016

Probably something I'm not seeing here, but maybe backwards compatibility could be sufficiently handled by:

  • for any new machine created, do not store credentials in ~/.docker/machine but rely on the Go AWS SDK
  • for any existing machine with credentials already in ~/.docker/machine, use those

Then "upgrading" a machine's credentials would be as simple as removing them from ~/.docker/machine, which could be done either manually or via a command or separate tool/script.
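
That compatibility rule reduces to a small predicate in the driver. The sketch below is illustrative (the type and function names are assumptions, not docker-machine's code): cached keys win when present, and an empty config signals the caller to defer to the SDK's default chain.

```go
package main

import "fmt"

// driverConfig holds the (possibly empty) AWS keys cached in a
// machine's config.json.
type driverConfig struct {
	AccessKey string
	SecretKey string
}

// useStaticCredentials reports whether a machine's cached keys should
// be used (existing machines) or whether the driver should fall through
// to the AWS SDK's default credential chain (new machines, which store
// nothing; also machines "upgraded" by deleting their cached keys).
func useStaticCredentials(c driverConfig) bool {
	return c.AccessKey != "" && c.SecretKey != ""
}

func main() {
	oldMachine := driverConfig{AccessKey: "AKIAEXISTING", SecretKey: "s3cret"}
	newMachine := driverConfig{}
	fmt.Println(useStaticCredentials(oldMachine)) // cached keys still honored
	fmt.Println(useStaticCredentials(newMachine)) // defer to SDK default chain
}
```

This keeps existing machines working unchanged while letting new (and manually upgraded) machines benefit from rotation-friendly SDK credentials.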

@ghost

ghost commented Sep 22, 2016

Any resolution on this?

@mjiderhamn

I have started working on at least the first step: defaulting to AWS SDK credentials when there are none in config.json. I believe only a small change is required here, though setting up the dev environment on Windows has been non-trivial.

mjiderhamn pushed a commit to mjiderhamn/docker-machine that referenced this issue Sep 26, 2016
If AWS credentials are not stored in machines config.json, use AWS SDK default. First step to fixing docker#533.

Signed-off-by: Mattias Jiderhamn <mattias.jiderhamn@readsoft.com>

10 participants