

Accessing private docker images #4

Closed

asans opened this issue Jan 11, 2015 · 5 comments

asans commented Jan 11, 2015

I've been looking through the code and at ways ECS can access private images, but I'm unable to find any documentation on this (the AWS documentation doesn't cover it either).

The description on the ECS product page indicates that this can be done. So where can I set the Docker authentication for the ECS agent? Or do I assume that this needs to be set inside a .dockercfg file and somehow placed into the AMI image upon instance launch?

Or should ecs-agent actually support this?


euank commented Jan 11, 2015

Unfortunately, Amazon ECS doesn't currently handle Docker auth. You can access private registries in the sense of registries that are private to your VPC, but that's not the best solution for everyone.

Because of how Docker handles auth, just having a .dockercfg file is not sufficient: the auth config is sent as part of the PullImage request to the daemon, and the daemon does not go out of its way to read auth information from any other source (it's the Docker client that reads it).

The agent absolutely should support this. We've been discussing the right way to do it and want to be sure we get it right both from the security perspective and the user perspective. How would you want this to work?

For anyone interested in looking into this more deeply, here's where it would happen.


asans commented Jan 12, 2015

I'm not entirely sure what you meant in parts of your explanation, so I'll just try to explain why we need this and how it might work out for us.

  1. We host private repos on the Docker Hub registry for pre-built images of our apps. This lets all our production servers pull those images directly without having access to the source repo, and since the CI built the image and ran it through tests, we know the image will run exactly as it was tested on the CI machine.
  2. My understanding of your explanation is that the dockercfg is read by the client, i.e. the ecs-agent or Docker client, and not by the Docker server. This makes perfect sense. So that means the agent must be able to accommodate this.
  3. Elastic Beanstalk supports private registries by referencing a dockercfg file stored in an S3 bucket. EB retrieves this file onto the server during provisioning and uses it to authenticate image pulls. The solution is sound but may not be a good fit for ECS, because EB is all-encompassing and has complete control over the machines and instances it manages, while any instance can be turned into a container server just by installing the agent.
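For reference, the legacy .dockercfg file that EB fetches is keyed by registry URL, with `auth` holding base64("username:password"); the credentials below are placeholders (base64 of "user:pass"):

```json
{
  "https://index.docker.io/v1/": {
    "auth": "dXNlcjpwYXNz",
    "email": "user@example.com"
  }
}
```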

Taking a look at how the agent is installed, I see two options.

  1. The agent config can contain the auth info. This dockercfg auth info can then be processed and either written out to a dockercfg file or passed to the Docker support library to programmatically access the private registry. In that case, the config info would also need to be settable in the userdata when launching a new instance, which makes sense since the ECS cluster needs to be set in the userdata at launch anyway. The advantage of this approach is that one central dockercfg authentication is used by the agent, so new task definitions never need to worry about authentication; security is handled by the agent itself. The disadvantage is that if a new task definition requires access to a new private registry, or the credential info needs updating later, it would be more difficult: it may require SSHing into the instance to change the configuration file and restart the agent, or shutting the server down and relaunching a new one through some sort of rolling-update mechanism. Generally, I think registry and credential changes are rare, so the disadvantage is not huge.
  2. Allow the task definition to contain the credential info when specifying the image to use. The advantage is that the credentials travel with the image being used, which is good, and it's easier to update should registry info or credentials change: a new task definition would be created, the old one deleted, and new tasks launched. This makes updates seamless and works well with the dynamic launching and removal of task instances. The disadvantage is that we have to redundantly provide the credentials across all the task definitions, which is a bit of a pain when registering new tasks.
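A sketch of what option 2 might look like in a task definition. To be clear, the `repositoryCredentials` block below is purely hypothetical, not a field ECS supports today, and every value in it is a placeholder:

```json
{
  "family": "web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "myorg/private-app:latest",
      "repositoryCredentials": {
        "username": "example-user",
        "password": "example-pass",
        "serverAddress": "https://index.docker.io/v1/"
      },
      "memory": 256,
      "cpu": 256
    }
  ]
}
```

This is the shape that makes the redundancy disadvantage concrete: each container definition pulling from a private registry would carry its own copy of the credentials.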

Not quite sure which one is better at this point. We can't use ECS without access to private registries, which makes it impossible for us to use at this time. Since I have to submit userdata anyway to put the server in a specific cluster, I might as well specify some sort of auth config at the same time. This could be done either by giving the agent an S3 key for accessing the config.json file, similar to how Elastic Beanstalk does it, or by creating it dynamically inside the userdata script. The S3 key is probably a bit more secure, since we can restrict access to the file via IAM and the instance role when needed.

S3 access is also good because the agent could periodically refresh the auth config file, or be restarted to retrieve an updated config on relaunch. That may resolve the disadvantage of option 1 above. Currently running instances continue to run without issue when agents are restarted (at least, that's what I assume is the case), so this allows gradual updates and changes without having to redo all the task definitions when credentials change.
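A sketch of that userdata approach (option 1 combined with the S3 fetch), assuming a hypothetical bucket name and an instance role with read access to it; only the `ECS_CLUSTER` line in `/etc/ecs/ecs.config` is established ECS agent configuration, the rest is placeholder:

```shell
#!/bin/bash
# Hypothetical userdata sketch -- bucket, key, and paths are placeholders.
set -euo pipefail

# Join this instance to the intended cluster (standard ECS agent config).
mkdir -p /etc/ecs
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config

# Fetch a dockercfg from S3 using the instance role, the way
# Elastic Beanstalk retrieves its auth file during provisioning.
aws s3 cp s3://my-config-bucket/dockercfg /root/.dockercfg
chmod 600 /root/.dockercfg
```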

I'm thinking as I write, so do forgive the long blurbs.


euank commented Jan 14, 2015

Thanks for the suggestions and information about your use case. The feedback is appreciated and we're working to improve the customer experience during our preview.


hellvinz commented Mar 5, 2015


euank commented Mar 5, 2015

The overarching issue is fixed, but the second option @asans mentions has not been addressed, and I want to capture that as well.

For clarity, I've opened a new issue, #28, specifically for that option, and I'm closing this one. @asans, if I missed any nuances of your suggestion, please add them on that issue.

Thanks,
Euan

@euank euank closed this as completed Mar 5, 2015
danehlim pushed a commit to danehlim/amazon-ecs-agent that referenced this issue Oct 26, 2022
fixed the additional-packages install to do a yum "localinstall" on all
packages in the directory. Localinstall considers all rpm package files
together as if they are being installed from a repo, and will correctly
order the installation of the rpms if some of them have dependencies on
each other.