checks if docker is running based on process name #29
Conversation
I understand it doesn't work this way currently, but could you please put the sig path in map.jinja, so that if an OS or someone's setup has an alternate location for it, they can update map.jinja? Thanks. |
Done. Reused the pkg map key so we don't have too many keys scattered all over. |
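For illustration, a map.jinja carrying the sig path alongside the package name might look something like this — the grain names, key names, and paths below are assumptions for the sketch, not the formula's actual values:

```jinja
{# map.jinja -- illustrative sketch only; the formula's real key names
   and default paths may differ #}
{% set docker = salt['grains.filter_by']({
    'Debian': {
        'pkg': 'docker.io',
        'sig': '/usr/bin/docker',  {# process signature for the service state #}
    },
    'RedHat': {
        'pkg': 'docker',
        'sig': '/usr/bin/docker',
    },
}, merge=salt['pillar.get']('docker:lookup')) %}
```

Because the map is merged with `pillar.get`, anyone with an alternate location can override `docker:lookup:sig` in pillar without touching the formula.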
checks if docker is running based on process name
I'm not sure it is the right thing to do here. |
@ticosax sounds good. |
@ticosax, @puneetk: I'm skeptical this is actually working for some people... At least on Ubuntu, Docker's init script returns 1:
Without this PR's process-signature check, the state breaks:
But I'm not sure why the formula is trying to start docker even though the status shows it's running:
Any ideas? |
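The process-signature check being discussed maps to the `sig` argument of `service.running`. A hedged sketch of the state — the state IDs and signature value here are hypothetical, not the formula's actual names:

```yaml
# Illustrative sketch: with `sig` set, Salt greps the process table
# for the signature instead of trusting the init script's status.
docker-service:
  service.running:
    - name: docker
    - sig: docker        # hypothetical signature; could come from map.jinja
    - require:
      - pkg: docker-pkg  # hypothetical pkg state ID
```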
@tiadobatima on Ubuntu I believe you should use `$ service docker start` and not invoke the init script directly. |
Right @ticosax, @puneetk: I was actually using "service" in all my tests. In this particular case, it doesn't matter which is used, because "service" just executes the sysv init script anyway. After a long time tracking this weird behaviour, I think I understand it now:
I see the comment in upstart.py about why it's done this way, but I'm not totally convinced Salt should use the output instead of the return code to determine status. I think the command should fail if it's meant to. Notice that it doesn't behave like that in the other actions (start(), stop(), restart()...). As for this formula in particular, until (and if) Salt's behaviour changes, I believe we have two choices:
What do you guys think? |
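To make the two status policies concrete, here is a small shell sketch. The status line and exit code are made up to mimic the buggy Ubuntu init script described above; this is not Salt's actual code, just the decision logic:

```shell
# A hypothetical buggy init script result: prints a "running" status
# line but exits non-zero, as observed on Ubuntu above.
status_output="docker start/running, process 1234"
status_rc=1

# Policy 1: trust the exit code. The service looks dead, so the
# state would try to (re)start docker on every run.
if [ "$status_rc" -eq 0 ]; then
    echo "exit-code policy: running"
else
    echo "exit-code policy: dead"      # printed here
fi

# Policy 2: trust the output (roughly what the process-signature
# check does): a non-empty status line counts as running.
if [ -n "$status_output" ]; then
    echo "output policy: running"      # printed here
else
    echo "output policy: dead"
fi
```

The disagreement between the two policies is exactly why the state keeps trying to start a Docker daemon that is already up.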
I think running salt-master and Docker within a Docker container is too experimental (if I understand your use case correctly) to be the default behaviour. I'm more in favour of making it opt-in. |
The use case is automated testing @ticosax . Quite handy. |
I personally consider testing Salt integration within a container a bad practice, since you can't reproduce the same behaviour between running Ubuntu on bare metal and running it in a container. The environments diverge too much for a trustworthy integration test. I will update my fork to current master and give it a try. So let's keep it this way. |
Hi @ticosax... Just adding some food for thought on the experimental nature of containers, without wanting to get too religious :)

Container support in Linux is not really beta quality anymore, though one could argue Docker is. It's just a different "platform". By nature, containers will never "catch up" to bare metal or even VMs; similarly, VMs behave differently than bare metal. If one understands the limitations of containers, they're an amazing tool for fast iterations during testing, or even for some production loads. Considering an IBM server can take up to 10 minutes just to start booting, and starting an EC2 instance takes up to a few minutes, one could run hundreds of container-based tests in that time (depending on the tests, of course).

Also, it's surprisingly not that complex to design software and config-management deployments to be platform-aware, though of course, depending on what we want to do, containers (or even VMs) just won't cut it. We just gotta know what we're doing. Some stuff just can't be done in containers, and to a lesser extent, some stuff can't be done in VMs.

Anyways... Thanks for checking the PR!! :) |
Thanks for sharing your point of view; it was interesting to read. I understand better why you chose to go that way. I can be convinced there are good things about running those tests in a container, knowing their limitations. |