
[aws] Avoid writing secrets to user-data #31

Closed

captn3m0 opened this issue Feb 5, 2019 · 9 comments

Comments

@captn3m0
Contributor

captn3m0 commented Feb 5, 2019

data "ignition_file" "eco-key" {
filesystem = "root"
path = "/etc/eco/eco.key"
mode = 0644
content {
content = "${var.eco_key}"
}
}

I was trying to think of ways to get the key onto the instance without writing it via user-data (AWS-only). The concern is that anyone with access to the DescribeInstanceAttribute API call can happily read the user-data back, secrets included.

The approach I was considering was:

  1. Write the secret to a Secrets Manager secret version resource (see the sketch after this list).
  2. Add a new flag to the eco binary to fetch the secret from Secrets Manager instead of reading it from disk.

For (2), another option is to add a secret-fetch systemd service that does this, but then getting the AWS CLI running on the instance becomes its own problem.
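
A rough sketch of (1), assuming the stock AWS provider resources (the names here are placeholders, not something the module ships today):

resource "aws_secretsmanager_secret" "eco_key" {
  name = "eco/eco.key"
}

resource "aws_secretsmanager_secret_version" "eco_key" {
  # Stores the key material as the current secret version.
  secret_id     = "${aws_secretsmanager_secret.eco_key.id}"
  secret_string = "${var.eco_key}"
}

The eco flag in (2) would then be pointed at the secret's ARN.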

Thoughts?

@captn3m0
Contributor Author

captn3m0 commented Feb 5, 2019

Another alternative is to write the key to S3 and fetch it via the ignition_file source argument, which supports S3 URLs: https://www.terraform.io/docs/providers/ignition/d/file.html#source-1
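
Something along these lines, assuming the source block from those docs (the bucket name is a placeholder, and the hash assumes the object content is exactly var.eco_key):

data "ignition_file" "eco-key" {
  filesystem = "root"
  path       = "/etc/eco/eco.key"
  mode       = 0644

  source {
    # Fetched by Ignition at first boot, using the instance's IAM credentials.
    source       = "s3://bucket_name/eco.key"
    verification = "sha512-${sha512(var.eco_key)}"
  }
}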

@Quentin-M
Owner

Quentin-M commented Feb 6, 2019

Hi @captn3m0,

You are absolutely right. It's yet another oversight from the time I was POC'ing this project; unfortunately, I never thought of coming back to it.

S3 support for Ignition is actually something I pushed CoreOS hard to implement back in Spring 2017, when I was still working there - it was a blocking issue for Tectonic.

It should be quite straightforward to rely on S3 ACLs / the EC2 IAM instance profile, as well as server-side encryption and TLS uploads for S3 (unless you want to use your own keys - which is fine too).
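
For instance, a minimal sketch of the read side (aws_s3_bucket.secrets and aws_iam_role.ec2 are hypothetical stand-ins for whatever this module actually creates):

data "aws_iam_policy_document" "secrets_read" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.secrets.arn}/*"]
  }
}

# Attach to the role behind the instance profile so Ignition can fetch the objects.
resource "aws_iam_role_policy" "secrets_read" {
  name   = "eco-secrets-read"
  role   = "${aws_iam_role.ec2.id}"
  policy = "${data.aws_iam_policy_document.secrets_read.json}"
}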

@captn3m0
Contributor Author

captn3m0 commented Feb 6, 2019

The extra resources created are:

  1. S3 bucket for storing the etcd secrets
  2. IAM resources granting access to the secrets
  3. S3 object storing the secrets
  4. KMS key for encrypting the secrets in S3 - I want to ensure this uses a new key created by this module (see the sketch after this list).
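
For (4), a minimal sketch, assuming the module owns the key outright:

resource "aws_kms_key" "secrets" {
  description         = "Encrypts the eco secrets stored in S3"
  enable_key_rotation = true
}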

If we're okay with pushing stuff to S3, an easier solution would be to push the entire ignition config to S3 instead:

resource "aws_s3_bucket_object" "ignition_master" {
  bucket                 = "bucket_name"
  key                    = "path/${random_id.ignition-key.hex}.json"
  content                = "${data.ignition_config.main.rendered}"
  acl                    = "private"
  server_side_encryption = "AES256"
}

resource "random_id" "ignition-key" {
  byte_length = 8
}

data "ignition_config" "s3" {
  replace {
    source       = "${format("s3://%s/%s", "bucket_name", aws_s3_bucket_object.ignition_master.key)}"
    verification = "sha512-${sha512(data.ignition_config.main.rendered)}"
  }
}


// ignition_config.main goes here

(This has the added advantage of staying within the user-data size limits)

I'm okay with either approach - whichever you think works better. The question I had concerns the creation of the bucket: should it live in this module?

I haven't dabbled with the kubernetes module yet; would it be affected by this change?

@Quentin-M
Owner

I agree with you, this is totally appropriate. We could definitely push the entire configuration to S3 - this is actually extremely straightforward as well. Using our own KMS key as you described is even better.

There is also the fact that the S3 provider pushes the etcd data backups to S3, and those should be fully encrypted too - as should the TF state stored in S3.

Feel free to create the new bucket in here, alongside the backup one. With the KMS configuration as well as the IAM role, it may make sense to create a new s3.tf file to hold everything together.

Regarding using a random_id to compute the filename: I imagine you are doing this to somehow version the configuration? If so, it is not necessary if you enable versioning on the S3 bucket itself (sketched below). New nodes coming up will always be able to use the latest configuration available in S3, without necessarily modifying the ignition_config. (Also because this random_id resource is only randomized once.) The exception would be if you want to leverage ASG configuration changes to automatically perform rolling updates of the etcd nodes, but I would be a little careful with that - not only does a node need to be up, but etcd needs to have fully recovered (which can take some time).
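
Bucket versioning is just a block on the bucket resource, e.g. (bucket name is a placeholder):

resource "aws_s3_bucket" "ignition" {
  bucket = "bucket_name"
  acl    = "private"

  versioning {
    # Every apply writes a new object version; nodes always fetch the latest.
    enabled = true
  }
}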

In your S3 object resource above, you need to use these parameters:

- server_side_encryption - (Optional) Specifies server-side encryption of the object in S3. Valid values are "AES256" and "aws:kms".
- kms_key_id - (Optional) Specifies the AWS KMS Key ARN to use for object encryption. This value is a fully qualified ARN of the KMS Key. If using aws_kms_key, use the exported arn attribute: kms_key_id = "${aws_kms_key.foo.arn}"
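
Roughly, putting it together (aws_kms_key.secrets is the hypothetical key from earlier, and the bucket/key names are still placeholders):

resource "aws_s3_bucket_object" "ignition_master" {
  bucket                 = "bucket_name"
  key                    = "ignition/master.json"
  content                = "${data.ignition_config.main.rendered}"
  acl                    = "private"
  # Switch from AES256 to KMS so the module-owned key encrypts the object.
  server_side_encryption = "aws:kms"
  kms_key_id             = "${aws_kms_key.secrets.arn}"
}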

Cheers!

@captn3m0
Contributor Author

captn3m0 commented Feb 7, 2019

Got it - the random_id-based versioning came from another codebase where we use this across multiple clusters. It won't be needed here 👍

Will file a PR soon.

@Quentin-M
Owner

Quentin-M commented Feb 13, 2019

I am wondering if we should take this public issue down and discuss it privately first instead - this could technically be an exploitable vulnerability. Just a suggestion anyway; I'm not sure how serious it is (e.g. can the user-data be read by unprivileged users?), as I am on my phone right now.

Edit: deleted our last few comments, just in case.

Repository owner deleted a comment from captn3m0 Feb 13, 2019
Repository owner deleted a comment from captn3m0 Feb 13, 2019
Repository owner deleted a comment from captn3m0 Feb 13, 2019
@lucab

lucab commented Feb 13, 2019

If I remember correctly, non-privileged users should not be able to read the system log via journalctl. On Container Linux, both the root and core users are allowed to do so via group permissions.

@Quentin-M
Owner

Quentin-M commented Feb 13, 2019

@lucab Thanks for the quick turnaround. That looks accurate indeed - it saves us from random container escapes. The only remaining bit is that lots of enterprise customers ship their logs to a central server, so it would depend on the authz there and on making the correct assumptions about the content.

@captn3m0
Contributor Author

So, this was fixed in Ignition 0.31.0, which is available starting with the following CoreOS releases:

  • stable: 2079.3.0
  • beta: 2079.1.0
  • alpha: 2065.0.0
