
env-file support #371

Closed
tbinna opened this issue Nov 4, 2015 · 54 comments

@tbinna tbinna commented Nov 4, 2015

I'm opening this issue to pick up a point made in #127: supporting the --env-file parameter. As pointed out in #127, this would be useful e.g. for adding environment variables that contain sensitive information. That way, sensitive environment variables could be stored in a private S3 bucket and pulled in from there, either directly or via a mounted volume.

If the --env-file parameter is supported, I guess the documentation on Task Definition Parameters could also be improved. Under environment it's mentioned that putting sensitive information there is not recommended, but no alternative solution is pointed to.

Extract from issue #127:

[...] Ideally it would allow an s3 endpoint:

"containerDefinitions":[
  {
    "env_file":[
      { "bucket":"my-bucket", "key":"myenvlist" }
    ]
  }
]

Elastic Beanstalk lets you do something similar in the Dockerrun.aws.json for docker private repository configuration:

"Authentication":{
  "Bucket":"my-bucket",
  "Key":"mydockercfg"
},
@jimlester jimlester commented Dec 1, 2015

I'm looking for env-file support as well.

@diranged diranged commented Dec 2, 2015

👍

@aldarund aldarund commented Jan 5, 2016

+1

@esetnik esetnik commented Jan 8, 2016

I have the same issue. I'm running a db connected task on ecs and I don't want to embed my db auth in the compose / task. I'm currently using ecs-cli but there's no support for encrypting the environment variables as far as I know.

When I've worked with CI systems that utilize docker (travis for example) they usually provide a mechanism for encrypting environment variables such that they can be embedded in config and decrypted when they are passed into the container. Travis Encryption Keys. I'm wondering if AWS does or could offer a similar feature for encrypting sensitive information destined for the container.

@jtmarmon jtmarmon commented Jan 20, 2016

+1 would like to be able to use KMS or something similar to encrypt env vars

@oliverwilkie oliverwilkie commented Jan 26, 2016

Does anyone have a good workaround for this?

@gavinheavyside gavinheavyside commented Jan 27, 2016

I've sometimes used a pattern where the entry point of my container fetches an env file from S3 and sources it before running my actual command. The location of this file can be passed as an env var, and IAM permissions used to control access, e.g (from memory, so it might not work as is):

CMD ["/bin/sh", "-c", "aws s3 cp --region eu-west-1 ${ENV_FILE_PATH} ./env.sh && . ./env.sh && command-to-run"]

It isn't ideal, but seems to work OK.
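A slightly more explicit sketch of that entrypoint pattern (paths and variable names are illustrative, and the `aws s3 cp` step is stubbed with a local file here so the flow is visible without credentials):

```shell
#!/bin/sh
set -e

# In a real container the env file would come from S3, e.g.:
#   aws s3 cp --region eu-west-1 "$ENV_FILE_PATH" /tmp/env.sh
# Here a local file stands in for the downloaded object.
printf 'export GREETING=hello\n' > /tmp/env.sh

# Source the fetched file so its variables land in this shell's
# environment; the real command would then run with them set.
. /tmp/env.sh
echo "$GREETING"
```

IAM permissions on the bucket object control who can read the file, which is the main appeal of the pattern.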

@ghaering ghaering commented Feb 26, 2016

I plan to use https://github.com/zeroturnaround/configo in my containers. It's a more general solution for loading environment variables from etcd, file, DynamoDB or Vault. Unfortunately, S3 is not supported yet.

I'm not sure env-file is supported in the Docker API; I guess it's a feature strictly of the docker command-line tool. Also, it helps little in a clustered environment, as you would still need to put this file on the host.

I'd prefer loading the environment variables from S3 instead, if you were to add this feature to ECS.
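For what it's worth, `--env-file` is indeed handled by the docker CLI rather than the daemon API: the CLI reads `KEY=VALUE` lines and sends them as plain `Env` entries. A rough sketch of that client-side expansion (file contents are made up):

```shell
# Build the equivalent -e flags from an env file, the way the CLI does
# conceptually: skip blank lines and comments, pass everything else through.
printf 'FOO=bar\n# a comment\nBAZ=qux\n' > /tmp/vars.env

flags=""
while IFS= read -r line; do
    case "$line" in ''|'#'*) continue ;; esac
    flags="$flags -e $line"
done < /tmp/vars.env

echo "docker run$flags my-image"
```

This is why the daemon never sees the file: by the time the request reaches it, only the expanded variables remain.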

@rfink rfink commented Apr 15, 2016

+1 would be a great feature

@jqmtor jqmtor commented Jun 9, 2016

It would be cool to know what the maintainers think about this issue in terms of relevance/priority. I am really needing this and I might be able to submit a patch.

@santouras santouras commented Jul 13, 2016

+💯

@enkoder enkoder commented Oct 13, 2016

💯 This would be an awesome feature!

@itsjamie itsjamie commented Nov 17, 2016

I think it should be implemented very closely to what @tbinna recommended, although I would include support for using KMS to decrypt the envfile before running the container.

Perhaps

"containerDefinitions": [
  {
    "env_file": [
      {
        "s3": {
          "bucket": "my-bucket",
          "key": "myenvlist"
        },
        "kmsArn": "<kmsArn>"
      }
    ]
  }
]

(kmsArn would be optional.)

This way, if you have sensitive data in the file, you encrypt it and upload it to S3. The container agent can pull it down, decrypt it with KMS at runtime, and pass the file directly via --env-file.

Thoughts from the Amazon team? If I were to submit a PR for the agent level changes, might that help see it implemented at the task definition level?
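The flow above could look roughly like this at launch time (bucket, object, and image names are hypothetical, and the commands are routed through a dry-run stub so the sequence can be shown without AWS credentials; drop the stub to actually execute):

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

# 1. Pull the KMS-encrypted env file from S3.
run aws s3 cp s3://my-bucket/myenvlist.enc /tmp/envlist.enc
# 2. Decrypt it with KMS (the output is base64, so in practice it would be
#    piped through `base64 -d` into the plaintext env file).
run aws kms decrypt --ciphertext-blob fileb:///tmp/envlist.enc \
    --query Plaintext --output text
# 3. Start the container with the decrypted file.
run docker run --env-file /tmp/envlist my-image
```

The task's IAM role would need s3:GetObject on the bucket and kms:Decrypt on the key.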

@tilgovi tilgovi commented Nov 18, 2016

@itsjamie maybe rather than a separate kmsArn key, something in the s3 object that can be used to specify whatever value for the SSE, whether that's a kms arn or AES256, etc. This is part of the s3 API so it might make more sense as part of the s3 block.

@cbbarclay cbbarclay commented Feb 11, 2017

Another option might be to use parameter store.

@jfarrell jfarrell commented Feb 23, 2017

Would be extremely useful to have env-file support, and an even bigger win to have that data come from Parameter Store in a TaskDefinition.

@otanner otanner commented Mar 30, 2017

I created a PoC with CloudFormation that creates a Lambda function to fetch values from the ParameterStore, and CFN then uses the Lambda function as a CustomResource. The same CFN template also creates the TaskDefinition (and the Cluster, Service, ALB etc.). This way it's possible to inject SecureText ParameterStore values into the TaskDefinition ENV (or to any other CloudFormation resource).

This is definitely not the most secure way to implement this, as the Lambda needs to be able to read/decrypt values for all the Tasks in the CFN template. I would prefer to use IAM Roles for Tasks and grant each Task access to only its own parameters, decrypting them via the ecs-agent using the Task's IAM Role when creating the container. Another downside is that the decrypted values can currently be seen in the TaskDefinition settings in the AWS console. Still, this implementation needs no changes to the actual container, and it's possible to use single key/value pairs instead of a full env-file.

@lrvick lrvick commented Apr 9, 2017

All current solutions I have seen involve having a bootstrap container that fetches secrets and writes them to a file. Then you need some way to get the env vars into the target service container. Volumes are one obvious way to do this.

It would then be expected you need tools inside the target container to load the env vars from a volume.

This can work if your service container bundles tools to source a file. If the target service is a bare-bones container with only a binary, such as a Go app, then there is no way to load env vars into it on the fly.

Currently this is a hard blocker for me being able to use ECS at all. Bundling plaintext secrets into task definitions is -not- a solution.

Direct KMS integration would be great but at the very least there needs to be a way to load environment vars from a volume or file on disk. Then a bootstrap container could do the legwork.

@michaelshaffer37 michaelshaffer37 commented May 19, 2017

@cbbarclay I think the Parameter Store would be a much better solution, as the host EC2 instance running the cluster wouldn't need to store the env var file. Essentially what @mrburrito is suggesting on #328 would solve the problem without exposing the variables to the host file system, much like the link that @myronahn provided, but running in the ECS agent rather than a special container.
Perhaps if we had something like the following.

"ContainerDefinitions":[
  {
    "Environment":[
        {
           "Name": "PRIVATE_VAR",
           "Value": {
               "Type": "parameter-store",
               "Name": "some.value",
               "Decrypt": true
           }
        }
    ]
  }
]

Then we could just manage access through the task's Role.

Just my two cents.

@WhileLoop WhileLoop commented May 19, 2017

Better integration between EC2 Parameter Store and ECS would be great. Please consider this.

@Adrian-Sherwood Adrian-Sherwood commented Jun 9, 2017

Better integration of all the Docker command-line parameters of the run command would be great.

Docker offers many smart and interesting possibilities, ECS shoots them down, thanks Amazon.

@orfin orfin commented Jul 12, 2017

Hi! As we also ran into the issue of securely passing env variables, we've just released a small utility that handles that problem.

It's using AWS Parameter Store for injecting env vars on container startup. Check sample Dockerfile on: https://github.com/Droplr/aws-env

Cheers! :-)
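The general shape of such a startup helper, sketched here with a static stand-in for the Parameter Store output (the parameter names are made up, and no real AWS call is executed):

```shell
# Convert "name value" pairs into export statements, stripping the
# Parameter Store path prefix (/myapp/DB_HOST -> DB_HOST).
to_exports() {
    while read -r name value; do
        printf 'export %s=%s\n' "${name##*/}" "$value"
    done
}

# Stand-in for:
#   aws ssm get-parameters-by-path --path /myapp --with-decryption \
#     --query 'Parameters[*].[Name,Value]' --output text
printf '/myapp/DB_HOST db.example.com\n/myapp/DB_PORT 5432\n' | to_exports
```

The task role's IAM policy then controls which parameter paths each container can read, which keeps the secrets out of the task definition entirely.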

@emanuil-tolev emanuil-tolev commented Jul 18, 2017

I'm currently trying out https://github.com/promptworks/aws-secrets . Associated blog posts:

https://www.promptworks.com/blog/handling-environment-secrets-in-docker-on-the-aws-container-service

https://www.promptworks.com/blog/cli-for-managing-secrets-for-amazon-ec2-container-service-based-applications-with-amazon-kms-and-docker

Looks like it: a/ has a simple interface; b/ has been around for about a year; c/ is used in production by its authors and others.

@aregier aregier commented Nov 2, 2017

Please consider this. It would be very nice to help maintain clean docker images and be able to inject environment variables from the /etc/ecs/ecs.config file by specifying the environment file in the container / task definition.

@dev-head dev-head commented Jan 12, 2018

Not having a secure way to do this is a bug. Perhaps we should dupe this request as a bug to get some AWS love.

@jtoberon jtoberon commented Jan 26, 2018

There are several related issues, including #1209 and #328.

@felipefrancisco felipefrancisco commented Mar 1, 2018

over two years and no response from aws team? wow...

@ajslater ajslater commented Mar 2, 2018

@jtoberon jtoberon commented Mar 2, 2018

Please see #1209. We'd love your feedback there to help guide our approach.

@brent-riva brent-riva commented Aug 13, 2019

@petderek @adnxn The use case that isn't addressed is a mass import of environment variables. I'm currently trying to run a Docker image on AWS that takes upwards of 15 variables through the env file for configuration, and AWS doesn't let me. I'm surprised it's still not implemented; it's something that would lead me to consider GCP or Azure.

@dorukgezici dorukgezici commented Apr 12, 2020

@srrengar I think it would be best to support both env-file and cluster-wide env definitions, which could also allow cluster-wide env files.

Idk if we are doing it wrong, but we have around 20 cluster-specific and around 10 task-definition-specific env variables. Some of those task definitions have multiple container definitions that mostly use the same env variables as well. Also, we have staging & production clusters. I had to copy around so many things that my eyes went black. There must be a better way.

@srrengar srrengar commented May 18, 2020

@shafi-khan shafi-khan commented May 18, 2020

@srrengar When I try to run a task after adding an env file spec in the task definition, I am getting an error, simply titled 'Reasons : ["ATTRIBUTE"]'.

@srrengar srrengar moved this from We're Working On It to Coming Soon in containers-roadmap May 18, 2020
@yhlee-aws yhlee-aws commented May 18, 2020

Hi @shafi-khan,
This error message means the instance is missing required attributes to launch the task. The new envfiles feature requires a new instance attribute "ecs.capability.env-files.s3".
Are you using the latest ECS Optimized AMI? Agent version 1.39.0 onward supports this feature.

@shafi-khan shafi-khan commented May 18, 2020

@yunhee-l I am currently using v1.37.0. Is updating to the latest AMI all I need to do? Do I need to specify that attribute somewhere?

@shafi-khan shafi-khan commented May 18, 2020

Never mind, it works after updating to v1.39.0. Thanks @yunhee-l.

@TusharMehtani TusharMehtani commented Jul 1, 2020

Hi folks, we just released environment files for containers using the EC2 launch type, with Fargate support coming soon:

https://aws.amazon.com/about-aws/whats-new/2020/05/amazon-elastic-container-service-supports-environment-files-ec2-launch-type/

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html

Can you please point me to some documentation that shows how to do it in a yaml config?

I've tried this:

EnvironmentFiles:
  - Value: "--s3 arn--"
    Type: "s3"

(tried the same with camelCase as well)
I keep getting - "Encountered unsupported property EnvironmentFiles"

@tehmaspc tehmaspc commented Jul 1, 2020

@TusharMehtani - the docs clearly state what format the env file needs to be in. Thus, you need to respect that format.
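For reference, the format the linked docs describe is one `VARIABLE=VALUE` pair per line in a UTF-8 encoded `.env` file, with `#` lines treated as comments, e.g.:

```
# comment lines start with '#'
DB_HOST=db.example.com
DB_PORT=5432
```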

@TusharMehtani TusharMehtani commented Jul 1, 2020

@tehmaspc - I think my question wasn't clear; I'll try to clarify it. My question isn't about the format of the env file; it's about the CloudFormation ECS Task Definition template. The documentation declares the following JSON to be a valid template:

"environmentFiles": [
  {
    "value": "arn:aws:s3:::s3_bucket_name/envfile_object_name.env",
    "type": "s3"
  }
]

Since I usually write my task definitions in YAML, I tried to write the JSON as the following YAML Equivalent:

EnvironmentFiles:
    - Value: "arn:aws:s3:::s3_bucket_name/envfile_object_name.env"
      Type: "s3"

This gives the error: Encountered unsupported property "EnvironmentFiles".
I'm using Container Agent v1.41.0, which is fine per the docs (requires >= v1.39.0). The env file is UTF-8 encoded and follows the proper format per the documentation.
I'm certain this isn't an issue with the env file, as the same thing works when I create the task definition through the ECS UI. So the issue is presumably with the template file.
Is the YAML format not supported here for some reason? This would be strange!

Here is a simplified version of my complete task definition YAML file for reference:

Description: >
  This is an example of a long running ECS service that serves a JSON API.

Parameters:
  VPC:
    Description: The VPC that the ECS cluster is deployed to
    Type: AWS::EC2::VPC::Id

  Cluster:
    Description: Please provide the ECS Cluster ID that this service should run on
    Type: String

  DesiredCount:
    Description: How many instances of this task should we run across our cluster?
    Type: Number
    Default: 1

  MyServiceImage:
    Description: URI of the Docker Image of Service you want to deploy
    Type: String
    Default: service/service-v1
    
Resources:
  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster
      DeploymentConfiguration:
        MaximumPercent: 100
        MinimumHealthyPercent: 0
      DesiredCount: !Ref DesiredCount
      TaskDefinition: !Ref TaskDefinition

  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: service-defn-dev
      ContainerDefinitions:
        - Name: service-backend
          Essential: true
          Image: !Ref MyServiceImage
          MemoryReservation: 128
          PortMappings:
            - ContainerPort: 8080
              HostPort: 8080
          MountPoints:
            - ContainerPath: "/container/path/"
              SourceVolume: "mount-point"
          EnvironmentFiles:
            - Value: "arn:aws:s3:::s3_bucket_name/envfile_object_name.env"
              Type: "s3"
          Environment:
            - Name: ENV_VAR_1
              Value: ENV_VAR_VAL_1
            - Name: ENV_VAR_2
              Value: ENV_VAR_VAL_2
          DependsOn:
            - ContainerName: pre-req-service
              Condition: START

      Cpu: "1024"
      Memory: "128"
      Volumes:
        - Host:
            SourcePath: "/host/path/to/mount"
          Name: "mount-point"
@yhlee-aws yhlee-aws commented Jul 1, 2020

Hi TusharMehtani,
CloudFormation support for env files is not yet available. We are working on it, however, and it will be available soon.

@TusharMehtani TusharMehtani commented Jul 1, 2020

Hi TusharMehtani,
CloudFormation support for env files is not yet available. We are working on it, however, and it will be available soon.

Thanks for the update @yunhee-l. Looking forward to this, will try to use some alternative for now.

@srrengar srrengar commented Aug 14, 2020

@raehalme raehalme commented Aug 14, 2020

@srrengar srrengar commented Aug 17, 2020

Yes Fargate support is in development

@bordeux bordeux commented Aug 28, 2020

Keeping secrets in S3 is a bad idea. Better to just use Secrets Manager for it.

@stadskle stadskle commented Sep 28, 2020

Yes Fargate support is in development

Do you have a separate GitHub issue for that which we can follow, @srrengar?

@srrengar srrengar commented Nov 5, 2020

Hi everyone, thank you for your patience. This feature is now available in Fargate as of today.

https://aws.amazon.com/about-aws/whats-new/2020/11/aws-fargate-for-amazon-ecs-launches-features-focused-on-configuration-and-metrics/

@srrengar srrengar closed this Nov 5, 2020
containers-roadmap automation moved this from Coming Soon to Just Shipped Nov 5, 2020