Don't make multiple docker images for worker #169

Closed

rtnpro opened this issue Feb 22, 2017 · 3 comments

rtnpro commented Feb 22, 2017

Issue

Currently, we build a separate Docker image for each worker script. This adds latency to the build/provisioning process.

Solution

We can create a single image containing all the worker scripts and then run that image with worker-specific commands.
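A minimal sketch of what the proposed single image could look like. The base image, install step, paths, and worker script names below are assumptions for illustration, not the actual Dockerfile from this repository; the key point is that no CMD or ENTRYPOINT is declared, so the command is chosen at `docker run` time:

```sh
# Hypothetical sketch: one image carrying all worker scripts.
# Base image, paths, and script names are assumptions.
cat > Dockerfile <<'EOF'
FROM centos:7
RUN yum -y install python && yum clean all
COPY worker_build.py worker_test.py worker_delivery.py /opt/cccp-service/
# Intentionally no CMD or ENTRYPOINT: each container supplies its own command.
EOF

docker build -t cccp-worker .
```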

navidshaikh self-assigned this Mar 2, 2017
navidshaikh added a commit to navidshaikh/container-pipeline-service that referenced this issue Mar 2, 2017

dharmit commented Mar 2, 2017

Isn't a container meant to run a single service? How much latency are we actually adding? I think the servers we build our images on have pretty decent bandwidth.

If creating containers is adding latency, why not just run the workers as systemd services, like we used to do earlier?

navidshaikh commented

@dharmit:

> If creating containers is adding latency, why not just run the workers as systemd services, like we used to do earlier?

There will be one container image containing all the worker scripts, and three containers will be spawned from that same image with different CMDs. If you look at the Dockerfile, there is neither a CMD nor an ENTRYPOINT; the three workers are started from the same image, each with its own CMD.

This saves the build time of those images during deployment, and the approach is more efficient.
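A sketch of how those three containers could be spawned from the one image, differing only in the command each container runs. Container names and script paths are assumptions carried over from the Dockerfile sketch above:

```sh
# Hypothetical sketch: three workers from the same image,
# each overriding CMD at run time.
docker run -d --name worker-build    cccp-worker python /opt/cccp-service/worker_build.py
docker run -d --name worker-test     cccp-worker python /opt/cccp-service/worker_test.py
docker run -d --name worker-delivery cccp-worker python /opt/cccp-service/worker_delivery.py
```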


dharmit commented Mar 3, 2017

That still doesn't say a word about why we shouldn't use systemd services instead 😄

Systemd services can be configured to restart upon failure. With the approach of one Docker image running multiple services inside it, we might end up making things more complex anyway by adding supervisord or systemd inside the container to manage those services.
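For comparison, a minimal sketch of the systemd alternative dharmit describes, using Restart=on-failure so a crashed worker comes back automatically. The unit name, description, and script path are assumptions for illustration:

```sh
# Hypothetical sketch: one worker as a systemd service with
# automatic restart on failure. Unit name and paths are assumptions.
cat > /etc/systemd/system/cccp-worker-build.service <<'EOF'
[Unit]
Description=Container pipeline build worker
After=network.target

[Service]
ExecStart=/usr/bin/python /opt/cccp-service/worker_build.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable cccp-worker-build.service
systemctl start cccp-worker-build.service
```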

navidshaikh added a commit to navidshaikh/container-pipeline-service that referenced this issue Mar 7, 2017
navidshaikh added a commit to navidshaikh/container-pipeline-service that referenced this issue Mar 10, 2017
navidshaikh added a commit to navidshaikh/container-pipeline-service that referenced this issue Mar 20, 2017
navidshaikh added a commit to navidshaikh/container-pipeline-service that referenced this issue Mar 27, 2017
navidshaikh added a commit to navidshaikh/container-pipeline-service that referenced this issue Mar 30, 2017
navidshaikh added a commit to navidshaikh/container-pipeline-service that referenced this issue Apr 3, 2017
rtnpro pushed a commit to rtnpro/container-pipeline-service that referenced this issue Apr 6, 2017
rtnpro pushed a commit to navidshaikh/container-pipeline-service that referenced this issue Apr 7, 2017