Need Host environment variable resolution to pass some information to a container #3

Open
owaaa opened this Issue Jan 6, 2015 · 98 comments


owaaa commented Jan 6, 2015

Before moving to ECS, I need to make sure I can run Consul in that environment. To run Consul, I need to pass some host environment variables as environment variables to the container. I would normally do this as -e HOST=$HOST etc. However, it seems that if I define a variable with $HOST in a task definition, it is treated as a static string.

Related, I also typically set docker command line options on the run command for several features I need. I may be able to get around some of this by overriding options in the docker daemon itself, but as for passing host information, I am not aware of any workaround.

Contributor

euank commented Jan 6, 2015

I'd appreciate a few clarifications around what you need here.

When you mention passing in host environment variables, what are you thinking about passing in, and how are they getting set on the host? Do you just need e.g. the hostname or IP, or do you want to access arbitrary host environment variables?

Also, if you could pass in environment variables as container overrides when you call start-task, would that help?

As it is, you can access the EC2 Metadata Service from within your containers to get the IP, Instance ID, and other information.

I'd like to understand the use-case a little better and any other possible solutions.

Best,
Euan

owaaa commented Jan 6, 2015

Thank you for the quick reply!

My end goal is to get several Docker "sidecars" working with ECS (service discovery, log shipping, etc.). Some of these containers expect host information to be passed in at run time, like the host IP, hostname, etc. I think most or all of the items I need are host specific. Currently all our deployments are automated via CloudFormation, and the environment variables are set during cloud-init, which also launches the containers. You raise a good point about the EC2 metadata service. I could probably fork some of these projects to use it, the only drawback being that the containers become more tied to AWS and so harder to run offline, etc.

The other option I was just considering is to launch the containers that require special config on each of the ECS hosts I create via CloudFormation, using our current process. ECS would then only manage my other containers. I am wondering, though, whether that would interfere with the CPU and memory allocation available to tasks?

A related question: I have many containers that need docker run arguments such as --dns, -h $HOST, --restart=always, etc. I think some of these I can set in the docker daemon startup options in /etc/sysconfig/, but some can't be set that way. Are you thinking about how docker run options like these might be used or specified?

Thanks,

Andrew

Contributor

euank commented Jan 14, 2015

Sorry for the slow response on this; passing through host environment variables is a tricky subject.

First of all, you have a few other questions that are easier to address.

  1. CPU / Memory allocation available:

Yes, running things on your instance that ECS is unaware of would cause issues there, but we want to support that use-case. Being able to override the resources ECS can use at launch makes a lot of sense to me. Good question.

  2. Docker run arguments

You're correct that anything that can be passed to the docker daemon (such as the options you reference) can be set with the following user-data:

#cloud-boothook
#!/bin/sh
echo 'OPTIONS="--dns=8.8.8.8"' > /etc/sysconfig/docker

If there are any specific docker run features you want that are not covered by our task definition, we'd be interested in hearing about them.

  3. Host environment variables

To clarify a little further: when you say your environment variables are "set with cloud-init", do you mean you've put the desired data in /etc/environment or similar?

Would being able to specify the equivalent of docker's "--env-file" as part of a container definition fit your use case, or would that be insufficient for some reason?
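For concreteness, here is a minimal sketch of what an --env-file based flow looks like with plain docker today; the file path and variable names are purely illustrative, and this is not an ECS feature:

# On the instance, e.g. from cloud-init (illustrative path):
cat > /etc/ecs/host.env <<EOF
HOST=$(curl -s 169.254.169.254/latest/meta-data/local-hostname)
LOCAL_IP=$(curl -s 169.254.169.254/latest/meta-data/local-ipv4)
EOF

# With plain docker, the file can then be handed to a container:
docker run --env-file /etc/ecs/host.env myrepo/myimage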

radenui commented Mar 3, 2015

Hello,

I'm re-launching the thread as I'm interested too, and I believe my use case is close to the one described here.

In the task definition, I can statically set anything related to the environment of the running task (domain, ...), but I also need to run my docker container with arguments depending on the host on which it is launched (host IP address, AWS instance Id, ...).

My docker run command would be something like:

docker run -d -p=80:80 -e HOST=$HOST myrepo/myimage java -Dmy.local.ip=$(curl 169.254.169.254/latest/meta-data/local-ipv4 2> /dev/null) -jar /path/to/my/jarfile.jar

And would correspond to:

docker run -d -p=80:80 -e HOST=myhostname myrepo/myimage java -Dmy.local.ip=10.0.0.5 -jar /path/to/my/jarfile.jar

Is there a way to reproduce this kind of behaviour with a task definition?

Thanks,

Arthur

Contributor

euank commented Mar 3, 2015

@radenui Currently there's not a direct way to reproduce that with a task definition.

However, if you change your image to have an entrypoint script that resolves these values, you can get similar behavior.
For example, such an entrypoint might look like:

#!/bin/sh
export HOST=$(curl -s 169.254.169.254/latest/meta-data/local-hostname)
export LOCAL_IP=$(curl -s 169.254.169.254/latest/meta-data/local-ipv4)
exec "$@"

With the above entrypoint, you can reference those environment variables in your command.
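A minimal usage sketch, assuming the script above is baked into the image as /entrypoint.sh (the image name and command below are illustrative):

docker run --rm --entrypoint /entrypoint.sh myrepo/myimage \
  sh -c 'echo "advertising $HOST ($LOCAL_IP)"'

Because the entrypoint exports the variables before exec'ing "$@", the wrapped command (here a shell) sees them in its environment.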

I'm still leaving this issue open since I think the above is probably not the right long-term solution (especially for common things like instance ip).

Best,
Euan

owaaa commented May 28, 2015

I had to table this temporarily, but the end goal is that I need Consul and Registrator working in ECS if I'm going to migrate our services. The Consul / Registrator Docker containers require $HOST and $HOSTNAME to be passed in to correctly register services and advertise the host machine IP. I was trying to avoid forking the Docker container with a solution like the entrypoint, as that ties the container to AWS.

The other original blocker is that on my own containers I need to set the DNS to the docker bridge via the --dns flag. I think #2 might work for that if I run the ECS agent on my own tailored machine with that setting.

johndvorakHR commented Jun 4, 2015

+1

tj commented Jun 5, 2015

Would love to see this as well, along with most of the docker run flags; it's necessary for logging & metrics unless you want to run those via Ansible/Terraform, etc.

dgalichet commented Jun 11, 2015

+1

Having HOST and LOCAL_IP directly defined would be much appreciated. My other option would be to link directly to the container that I want to access, but it's not actually possible to link to a container not defined in the same task (https://forums.aws.amazon.com/thread.jspa?threadID=179653&tstart=0).

pschorf commented Jun 25, 2015

I'm also interested in this to pass database connection information

juliencruz commented Jul 1, 2015

+1

Also interested in this feature as it fills in some gaps for auto-registration of containers in our environment as well.

rainulf commented Jul 8, 2015

+1 👍

I would like to have this feature as well.

pikeas commented Jul 17, 2015

+1, same use case as @owaaa - I want to use consul + registrator, so I need a clean way to pass host-specific config into a container.

nexus49 commented Jul 17, 2015

+1
Being able to pass in the host or other dynamic values specific to the host is valuable, either through an environment variable or possibly through a CMD statement.

Especially when using third-party containers that you don't want to create your own version of.

vsudilov commented Jul 30, 2015

+1; My use case is also described above

ccit-spence commented Aug 10, 2015

@euank I have been trying your entrypoint suggestion and can't get the shell script to work. Do you have a working example?

Contributor

euank commented Aug 13, 2015

@ccit-spence I put together a short example that worked for me which I've posted in this gist.

iainlbc commented Aug 17, 2015

+1, consul & registrator use case as well!

tdensmore commented Aug 17, 2015

+1. Need support for docker run flags. ECS not supporting Docker defaults is puzzling (at best).

Why has this not been implemented yet? On March 3 the proposed workaround to ECS shortcomings was identified as "probably not the right long-term solution (especially for common things like instance ip)". Five months have passed...

scatterbrain commented Sep 18, 2015

+1

bs-thomas commented Oct 4, 2015

+1

Lakret commented Oct 29, 2015

+1

urog commented Oct 29, 2015

+1 to being able to execute arbitrary shell to define values for vars.

Edit: that is, on the host node, passing the result to the container.

filosganga commented Oct 29, 2015

@urog Not all the images have a shell.


todd-alexion commented Apr 17, 2017

This issue has been open for more than two years. I am glad I stopped using ECS.

jch254 commented Jun 29, 2017

+1!!!

nikhilo commented Jun 30, 2017

I was able to start Consul in ECS with a not-so-shabby workaround: CONSUL_BIND_INTERFACE=eth0.
Here is the full command:

docker run -it --rm --name=consul_ecs --net=host -e CONSUL_BIND_INTERFACE=eth0 \
consul:latest agent -server -ui -client='0.0.0.0' \
-retry-join-ec2-tag-key=Services -retry-join-ec2-tag-value=dev-consul \
-bootstrap-expect=3
sijocherian commented Aug 23, 2017

+1, want to use Consul.

cheddesi commented Sep 14, 2017

Two years and still there is no neat solution to this problem!
AWS Team, any update on this issue?

Thanks
Siva Chedde

dmerrick commented Sep 14, 2017

It doesn't look like they have any intention of adding this.

danielhanold commented Sep 19, 2017

Here's a solution that includes creating a file containing environment variables on the Docker Host, using CloudFormation to automate the creation of this file, and then using an ENTRYPOINT script inspired by the Postgres Docker image: https://www.danielhanold.com/2017/09/set-dynamic-environment-variables-ecs-containers-using-mounted-volumes-docker-entrypoints/
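The core of that approach is roughly the following entrypoint (a minimal sketch, not the exact script from the post; the mount path /host-env/ecs.env is an assumption, with the file written on the instance by CloudFormation user-data and exposed to the container via a task definition volume):

#!/bin/sh
# Sketch: source host-provided variables from a mounted file, then hand off
# to the image's normal command. The path below is hypothetical.
if [ -f /host-env/ecs.env ]; then
  . /host-env/ecs.env
fi
exec "$@"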

rjosephwright commented Sep 19, 2017

@danielhanold that approach isn't necessary unless you need information passed in that is unique to the underlying host, such as its IP address.

Example using CloudFormation (pardon if the syntax is not 100% correct):

Parameters:
  NodeEnv:
    Type: String
    AllowedValues:
      - QA
      - Staging
      - Production

Resources:
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
        - Name: my-container
          Image: my-image:v1.0
          Environment:
            - Name: NODE_ENV
              Value: !Ref NodeEnv
danielhanold commented Sep 19, 2017

@rjosephwright I'm currently using Travis to build and upload the Docker image to ECR, then generate an updated JSON representation of the task definition, followed by an AWS CLI call to update a specific CloudFormation stack using the new task definition.

I don't know why I never thought of defining the task definition in CloudFormation itself the way you described, which sounds like a much cleaner way to manage & update containers with CFN. Thanks for the great suggestion.

simonslater commented Sep 21, 2017

@danielhanold what I like about your solution is that it allows a task definition to be used by multiple environments.

adamrbennett commented Oct 3, 2017

Here's my use-case:
Create a task definition and run it in dev stack.
Test, iterate, etc.
When ready to promote, reuse the same task definition in prod stack.

It seems this is the intended approach, since task definitions are not cluster-specific. Furthermore, the task definition includes run-time properties that may affect run-time behavior. It makes the most sense to promote the run-time definition, rather than promoting just the build-time definition (i.e. the Docker image) and then re-creating, or trying to match, the run-time definition that was tested in dev.

In order to achieve this, we must be able to override the container's environment variables depending on the stack it is deployed to. This would be simple if we could use host environment variables to set values on container environment variables.

EDIT: I suppose the most elegant solution is to actually be able to override container environment variables in the service definition (which is cluster-specific), much like you can when running an ad-hoc task.

shailesh2088 commented Oct 19, 2017

+1
This is possible when you execute docker run from bash. I don't see why it's not possible using task definitions.

zfletcher commented Nov 7, 2017

New, from AWS... https://aws.amazon.com/about-aws/whats-new/2017/11/amazon-ecs-allows-containers-to-directly-access-environmental-metadata/

Now, applications running in containers managed by Amazon ECS can directly query environmental metadata such as the Docker container and Docker image name and id, the container’s networking configuration and port mapping, as well as the ECS Task’s and Container Instance Amazon Resource Name (ARN). This makes applications running on Amazon ECS environment-aware, and enables them to self-register in Service Discovery and Configuration Management solutions.

ECS_CONTAINER_METADATA_FILE=/opt/ecs/metadata/e6e6d129-b432-43aa-bbf5-3dc2498142r0/ecs-container-metadata.json

👍
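A minimal entrypoint-style sketch of consuming that from inside a container (assuming ECS_ENABLE_CONTAINER_METADATA=true is set in the instance's ECS agent config and that jq is available in the image; verify the exact field names against your agent version):

#!/bin/sh
# The metadata file is populated shortly after the container starts,
# so wait until it reports READY, then read a field from it.
until [ "$(jq -r '.MetadataFileStatus' "$ECS_CONTAINER_METADATA_FILE" 2>/dev/null)" = "READY" ]; do
  sleep 1
done
TASK_ARN=$(jq -r '.TaskARN' "$ECS_CONTAINER_METADATA_FILE")
echo "Running as part of task $TASK_ARN"
exec "$@"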

Contributor

nmeyerhans commented Nov 7, 2017

Thanks @zfletcher for calling that out here. Hopefully this container metadata API addresses most of the use cases captured in this issue. You had previously called out dynamically mapped host ports as something of particular interest to you. We do support that, but be aware of #1052 for now.

I'd love to resolve this issue at this point, but I don't want to do so if there are still scenarios that aren't addressed by the container metadata feature. This issue has been open for a long time and quite a few people have contributed to it. If you've been waiting for this functionality, please give it a try and let us know if we've addressed your needs. If we haven't, please provide details. (Note that we'd also be happy to have people open new issues; it might help us keep better track of the various requests.)

danielhanold commented Nov 9, 2017

@zfletcher Thanks for adding this update, but @nmeyerhans: I don't think this addresses or resolves the problem raised in this issue. @adamrbennett describes (what I believe to be) the most common use case that is currently not possible with ECS:

Create a task definition and run it in dev stack.
Test, iterate, etc.
When ready to promote, reuse the same task definition in prod stack.

The issue is pretty close to the actual title of this issue: "Need Host environment variable resolution to pass some information to a container". Here's my use case, which is not solved by this recent addition:

  • CFN Stack A that creates an ECS cluster with 1 ECS instance and sets an environment variable ENV_NAME=DEV on the ECS instance
  • CFN Stack B that creates an ECS cluster with 1 ECS instance and sets an environment variable ENV_NAME=PROD on the ECS instance
  • Task Definition X that is running a container on Stack A, which uses ENV_NAME from the ECS Cluster created by Stack A => ENV_NAME = DEV
  • Once tested successfully, the very same task definition X is then promoted to Stack B and the container uses the ENV_NAME from the ECS Cluster created by Stack B => ENV_NAME = PROD

Other than this use case, it's frustrating that we can't use variables for the container environment variables in the task definition and are limited to the hardcoded values set in a task definition.

SilverGhostBS commented Nov 10, 2017

@danielhanold Thanks, that's EXACTLY my use case.

What's more frustrating is that this is allowed through the command line with vanilla Docker: just pass -e VAR_NAME and it will pass the env var through from the host to the container.

But the task definition schema absolutely insists on us setting a value for the parameter, whereas simply allowing it to be omitted might work.
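For reference, a sketch of the plain Docker passthrough behavior being described (variable name is illustrative):

export ENV_NAME=DEV                    # set on the host
docker run -e ENV_NAME myrepo/myimage  # no value given: docker copies ENV_NAME from the host environment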

pikeas commented Dec 7, 2017

Agree with @danielhanold!

It doesn't make sense to have separate task definitions for dev/staging/prod. I'd like to define my needed env vars once in the task definition, then populate values in each environment's service definition.


mceg commented Dec 15, 2017

The workaround that I am using:

{
    "name": "kafka",
    "image": "confluentinc/cp-kafka:4.0.0",
    "essential": true,
    "memoryReservation": 2048,
    "command": [
      "/bin/bash",
      "-c",
      "export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://$(hostname -i):9092; /etc/confluent/docker/run"
    ],
    "environment": [
      {
        "name": "KAFKA_ZOOKEEPER_CONNECT",
        "value": "..."
      }
    ],
...
}

It's a small modification of the entrypoint approach, but without the need to maintain your own containers and without running containers from user-data scripts; you just register them as services, as it should be.

akrymets commented Mar 6, 2018

If you're using the official Consul image, its entrypoint.sh allows you to specify a CONSUL_BIND_INTERFACE env variable; if it's set, the script determines the IP address on that interface and substitutes it into the "-bind" parameter at Consul startup. If you set this variable to, say, "eth0", your Consul agent will automatically be configured with the IPv4 address belonging to eth0 during startup.
Somebody has already mentioned this solution here.

pb0101 commented Mar 16, 2018

The real problem comes when you are using third-party Docker images that you can't modify to add another ENTRYPOINT.
Is there any other solution besides modifying the ENTRYPOINT?

zfletcher commented Mar 16, 2018

You can usually do docker run --entrypoint "/bin/bash" ... or similar.
