[aws] Avoid writing secrets to user-data #31
I was trying to think of ways to get the key pushed to the instance without writing it via user-data (AWS-only). The concern stems from people/tools having access to the DescribeInstanceAttribute API call, which will happily return the user-data, secrets included.
One possible approach I was considering:
For (2), another option is to add a secret-fetch systemd service which does this, but then figuring out how to get the AWS CLI running on the instance is another problem.
Another alternative is to write it to S3, and fetch it from S3 via the
You are absolutely right. It's yet another oversight from the time I was POC'ing this project; unfortunately, I never thought of coming back to it.
S3 support in Ignition is actually something I pushed CoreOS hard to implement back in Spring 2017, when I was still working there - it was a blocking issue for Tectonic.
It should be quite straightforward to rely on S3 ACLs / the EC2 IAM instance profile, as well as server-side encryption and TLS upload for S3 (unless you want to bring your own keys, which is fine too).
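A minimal sketch of that instance-profile approach (all resource names, the bucket, and the object key here are hypothetical, not from this module):

```hcl
# Sketch: scope the instance's read access to just the secret object,
# so the secret never has to travel through user-data.
resource "aws_iam_role" "secret_reader" {
  name = "secret-reader"

  # Allow EC2 instances to assume this role.
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
}

resource "aws_iam_role_policy" "secret_read" {
  name = "secret-read"
  role = "${aws_iam_role.secret_reader.id}"

  # Grant read access to the single secret object only.
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::example-secrets-bucket/secret-key"]
  }]
}
EOF
}

resource "aws_iam_instance_profile" "secret_reader" {
  name = "secret-reader"
  role = "${aws_iam_role.secret_reader.name}"
}
```

Attaching this profile to the instance lets it fetch the object over TLS with its instance credentials, while DescribeInstanceAttribute only ever sees a pointer.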
The extra resources created are:
If we're okay with pushing stuff to S3, an easier solution would be to push the entire ignition config to S3 instead:
(This has the added advantage of staying within the user-data size limits)
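A sketch of what that could look like, assuming hypothetical bucket/variable names: the full Ignition config is uploaded to S3, and user-data shrinks to a tiny stub whose `ignition.config.replace.source` points at the object - Ignition's native `s3://` support fetches it using the instance profile credentials.

```hcl
# Hypothetical sketch: push the real (large) Ignition config to S3 and
# pass only a small pointer config via user-data.
resource "aws_s3_bucket_object" "ignition" {
  bucket  = "example-ignition-bucket"
  key     = "worker.ign"
  content = "${var.ignition_config}"   # the full Ignition JSON
}

resource "aws_instance" "worker" {
  ami           = "${var.ami_id}"
  instance_type = "t3.medium"

  # Stub config: Ignition replaces itself with the object fetched from S3.
  user_data = <<EOF
{
  "ignition": {
    "version": "2.1.0",
    "config": {
      "replace": {
        "source": "s3://example-ignition-bucket/worker.ign"
      }
    }
  }
}
EOF
}
```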
Okay with either approach, whichever you think works better. The question I had concerns the creation of the bucket: should it live in this module?
I haven't dabbled with the kubernetes module yet, would that be affected by this change?
I agree with you, this is totally appropriate. We could definitely push the entire configuration to S3 - this is actually extremely straightforward as well. Using our own KMS key as you described would be even better.
There is also the fact that the S3 provider pushes the etcd data backups to S3, and those should be fully encrypted as well - as should the TF state stored in S3.
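A sketch of a dedicated KMS key plus a bucket that encrypts everything at rest with it by default (key description, bucket name, and resource names are all hypothetical):

```hcl
# Hypothetical sketch: one KMS key, and default SSE-KMS on the bucket so
# secrets, etcd backups, and anything else landing there are encrypted.
resource "aws_kms_key" "secrets" {
  description             = "Encryption key for cluster secrets and backups"
  deletion_window_in_days = 30
}

resource "aws_s3_bucket" "secrets" {
  bucket = "example-secrets-bucket"
  acl    = "private"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = "aws:kms"
        kms_master_key_id = "${aws_kms_key.secrets.arn}"
      }
    }
  }
}
```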
Feel free to create the new bucket in here, alongside the
Regarding using a
In your S3 object resource above, you need to use these parameters:
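The exact parameter list from the original comment is not preserved here; presumably it refers to the `server_side_encryption` and `kms_key_id` arguments on `aws_s3_bucket_object`, which would look roughly like this (bucket, key, and variable names are hypothetical):

```hcl
# Hypothetical sketch: encrypt the uploaded object with the module's own
# KMS key rather than the S3-managed default.
resource "aws_s3_bucket_object" "secret" {
  bucket  = "example-secrets-bucket"
  key     = "secret-key"
  content = "${var.secret_material}"

  server_side_encryption = "aws:kms"
  kms_key_id             = "${var.kms_key_arn}"
}
```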
added a commit on Feb 13, 2019
I am wondering if you should take this public issue down and discuss it privately first instead, since this could technically be an exploitable vulnerability. Just a suggestion anyway - I'm not sure how serious this is (e.g. can it be read by unprivileged users?) or whether it should be treated as such, as I am on my phone right now.
Edit: deleted our last few comments, just in case.
@lucab Thanks for the quick turnaround. Looks accurate indeed - it saves us from random container escapes. The only remaining bit is that lots of enterprise customers ship their logs to a central server, so it would depend on the authz in place there and on making the correct assumptions about the content.