Keep a minimum number of slaves ready at all times #9

Open
far-blue opened this issue Aug 26, 2016 · 5 comments
Comments

@far-blue commented Aug 26, 2016

I really like the idea of on-demand slaves, but I'd like to suggest the ability to keep a minimal number of slaves ready at all times rather than dropping back to none. There are times when we have lots of builds going through, but there are also times when we might have only 10 builds in a day. On those quieter days it would be nice to have 1 or 2 slaves available rather than spinning up and then shutting down a slave for effectively every build. Of course, I could achieve this using statically assigned slaves, but keeping everything in one config would be nice.

antweiss commented Aug 29, 2016

Thanks @far-blue - it's an interesting idea. So as a user you'd like to be able to define a constant pool of online cloud-provisioned nodes for each Nomad slave template. Let me ask you - why is slave spin-up an issue? In my experience it takes ~20 seconds to spin up a new docker slave on Nomad. Isn't this acceptable for your use case? Or do you see longer times?

@far-blue (Author)

We have a few very short jobs, such as triggering Job DSL script reruns, which run whenever we create branches in repos - and that's quite often, as we branch for every story. Also, the proper build jobs keep a cache of external components - composer, npm, bower etc. - which (because Nomad can't do state yet) is effectively wiped when a node is destroyed. Keeping a small number of nodes around helps reduce the number of times we fetch this data from the internet rather than from the cache.

@adrianlop

A workaround: try increasing the slave idle time for this use case.

@far-blue (Author)

For the moment I have a small number of 'swarm' slaves managed as a service job in Nomad, which are always available. I've also increased the on-demand slaves to 30 minutes idle before cleanup. It would be nice to have management in one place though :)
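
For anyone curious, a minimal sketch of what such an always-on Nomad service job might look like - the `csanchez/jenkins-swarm-slave` image, the Jenkins master URL and the credentials below are placeholder assumptions, not far-blue's actual setup:

```hcl
# Hypothetical Nomad service job keeping two Jenkins swarm slaves online at all times.
# Image name, master URL and credentials are illustrative placeholders.
job "jenkins-swarm-slaves" {
  datacenters = ["dc1"]
  type        = "service"

  group "slaves" {
    count = 2  # the "1 or 2 always-available slaves" described above

    task "swarm-slave" {
      driver = "docker"

      config {
        image = "csanchez/jenkins-swarm-slave"
        args = [
          "-master", "http://jenkins.example.com:8080",
          "-username", "swarm",
          "-password", "changeme",
          "-executors", "1",
        ]
      }

      resources {
        cpu    = 500   # MHz
        memory = 1024  # MB
      }
    }
  }
}
```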

zstyblik commented Oct 16, 2017

> In my experience it takes ~20 seconds to spin up a new docker slave on Nomad

Unfortunately, my experience is somewhat different. At times Nomad, or something, takes its sweet time to spin up the Docker containers and Jenkins slaves. Originally I thought this was caused by the force pull, but after running `docker images` everywhere, that doesn't seem to be the problem.

EDIT: actually, I might have been wrong about docker pull.

EDIT2: nope, it doesn't seem to be caused by pulls, as I've increased docker.cleanup.image.delay. Nomad seems to just be doodling around; I can see slaves come on and then off, but that's it. So yes, there are some issues. However, improvements are coming in 0.7.
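
For reference, `docker.cleanup.image.delay` is set under the Nomad client's `options` block; a minimal sketch, with the one-hour value purely as an example rather than zstyblik's actual setting:

```hcl
# Nomad client config sketch: keep pulled Docker images around for an hour
# before garbage collection. The "1h" value is illustrative only.
client {
  enabled = true

  options {
    "docker.cleanup.image.delay" = "1h"
  }
}
```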
