New containers created every reboot #1505
Also experienced multiple containers on reboot, but everything else works great for me.
Experienced multiple containers on reboot as well.
@RavenXce Indeed, I see the same thing.
@RavenXce @bobvanderlinden i am unable to recreate this. can you confirm that the running containers remain running for more than a few minutes after boot? i'd like to see how dokku has configured nginx. can you include the nginx.conf for your app as well as the output from
@michaelshobbs No it doesn't go away after a number of minutes (checked after 2 hours). I rebooted again and ran
What is interesting is that I've stopped the app already. Here is the listing of plugins:
As stated earlier, going through the code of Dokku it seems the problem is around this statement: it seems it doesn't actually stop the app, as that statement is the same as
The current implementation preserves zero-downtime deployment. Calling
I'm seeing the expected behavior below. I have seen this behavior on hosts with limited free memory.
Old containers are stopped after 60 seconds using ps:restart, but rebooting the server will start a lot of containers and won't stop them after 60 seconds. PS: I have 8 GB of RAM and only ever use about 1 GB at any time.
@josegonzalez related to the restart policy perhaps?
Yeah that's what I'm thinking. On boot, containers should start through docker. Maybe worth disabling the reboot
I'll give that a try.
Ok, just gave this a try. I had a bunch of 'zombie' docker processes from a dokku app. I stopped them all. Surprisingly, all stopped processes were still there after rebooting (same IDs). I'm not sure whether Dokku misconfigured the docker instances or whether this is a docker problem in itself.
The processes get started again after reboot due to our
@josegonzalez Indeed! It seems this is a docker problem: moby/moby#11008. The issue was fixed by moby/moby#15348, which adds
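For readers following along, the difference between the two restart policies being discussed can be shown with plain docker commands; a minimal sketch, with `my-app` as a hypothetical image name:

```sh
# --restart=on-failure:10 restarts the container when it exits non-zero;
# at the time of this thread, docker also brought previously *stopped*
# on-failure containers back up after a daemon/host restart (moby/moby#11008)
docker run -d --restart=on-failure:10 my-app

# --restart=unless-stopped (added in moby/moby#15348) restarts the container
# whenever it dies, except that explicitly stopped containers stay stopped,
# including across a reboot
docker run -d --restart=unless-stopped my-app
```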
That's annoying. Can you see if there is an issue giving it the
I haven't found anything, so I asked here: moby/moby#15348 (comment) Also, this option is only available in the master branch of docker, not in any of the stable versions. Kind of a bummer.
That wouldn't be my first preference. Perhaps we should roll back the
Yeah, that's my thinking here. While I like being able to support that functionality, I also like having fewer known bugs.
The restart on failure could potentially be handled inside the container. Not ideal, but people might be able to use that to get the restart-policy behavior back. That said,
We could just stop all apps on boot and reboot?
I don't think that will solve the issue. We currently only store the container id of the last known successfully started container. https://github.com/progrium/dokku/blob/07e94de20513c1e274466a09a486dd7884839ee5/dokku#L151
@josegonzalez That's also far from preferable for production. Reboots will happen (planned or not), and manual intervention (starting all apps) shouldn't be needed after an unplanned reboot. Docker solves this problem, as it already automatically starts containers upon boot if they were running before the reboot. Once Docker fixes the issue, dokku won't need to do anything at boot.

However, as docker hasn't released the fix yet, Dokku needs to work around this. The problem at the moment is that containers from older versions of an app (after rebuilds/restarts) that were stopped by dokku are all started upon boot by docker. The way to make sure Docker doesn't start these containers is to remove them. That way old containers (previous versions of an app) are deleted from disk, Docker will not start them, and only the latest containers/versions of apps are left. At least, that's what I've found to be a way to make things work correctly. I'm not sure whether this has other side-effects.
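A rough sketch of the stop-and-remove workaround described above; the variable name is illustrative, not dokku's actual code:

```sh
#!/bin/sh
# OLD_CONTAINER is assumed to hold the id of the previous version of the app
OLD_CONTAINER="$1"

# stop the old container; dokku already does this for zero-downtime deploys
docker stop "$OLD_CONTAINER" || docker kill "$OLD_CONTAINER"

# remove it entirely so the restart policy can no longer resurrect it on boot
docker rm "$OLD_CONTAINER"
```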
Experiencing the same problems.
I still can't reproduce this in my vagrant environment. Any tips?
@michaelshobbs I have not tried this using Vagrant, just VirtualBox. To reproduce you could do exactly the same as I did:
Also, the necessary fix for Docker is now in 1.9.0-rc1:
To be clear, my vagrant setup is using VirtualBox underneath. Is the cycle-remote app available publicly? I'd like to compare apples to apples here just in case there's something going on with the app. Also, what are the resource allocations (disk/cpu/memory) for your vbox setup?
Ah! I just got this to happen. Ok, I see now that if I've done an
It looks like container 1 started up due to the restart policy, and then a new container came up on boot. @bobvanderlinden, given you can recreate this seemingly at will, could you modify
Let us know if that solves the issue. If not, I'd like to know the state of those containers. Can you capture that by adding
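For anyone reproducing this, the state of all containers (including stopped ones) can be captured as a starting point; a generic sketch, not necessarily the exact flag being asked about above:

```sh
# -a includes exited containers; the STATUS column shows whether each
# container is up, restarting, or exited (and with which exit code)
sudo docker ps -a
```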
Oh, also interesting, just found out about
Those containers should be killed after 60 seconds, which is part of our zero-downtime deploys feature. Perhaps we can keep a list of all "killable" containers? We can add/remove containers from this list as they get queued up for deletion and then kill all "killable" containers after a reboot...
@josegonzalez From what I could tell here, the containers are only stopped; they are only killed if stopping is unsuccessful. However, those containers are never actually removed. Docker will still start these stopped containers upon reboot because of the restart policy.
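The restart policy does survive a plain stop, which is why docker brings the container back at boot. A quick way to check this on a stopped container (the id is a placeholder):

```sh
# prints the restart policy still attached to a container, e.g. "on-failure",
# even when the container itself is in the Exited state
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' <container-id>
```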
Odd, I thought it had a
I'd like to add that we just ran into the same issue. We provision a VM with a couple of services via dokku (with VHOST set) and finally stop the VM to export it for distribution to our customers. When the VM is imported and started with VirtualBox, "some" of the service containers are started more than once and the IPs set in dokku/serviceId/IP.web.1 are not valid anymore. Therefore the services cannot be reached anymore.
Yeah, so at the least we should have that. As far as the reboot is concerned, perhaps we should stop/kill all app containers first. We can also listen to the shutdown or reboot runlevel and stop all containers that should have been killed immediately. Not sure how that might be implemented, but it's definitely an idea.
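One possible shape of the "stop everything dokku-related at shutdown" idea; a hypothetical sketch, where the `dokku/*` image-name filter is an assumption about how dokku tags app images:

```sh
#!/bin/sh
# stop every running container that was started from a dokku-built image,
# so nothing is left in a state docker would bring back after the reboot
for cid in $(docker ps -q); do
  image=$(docker inspect -f '{{.Config.Image}}' "$cid")
  case "$image" in
    dokku/*) docker stop "$cid" ;;
  esac
done
```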
Adding it helps. The only downside to using it:
This resolves issues where docker would start old containers upon reboot. Related to issue dokku#1505.
PR #1569 should solve the problem. It did for me.
A few thoughts here:
i'd prefer we remove restart policies until they are better fleshed out in docker 1.9.
That said, whatever fixes the problem for now is fine for me.
The usefulness of auto restarting an app makes sense to me. A standard example is using a process manager like supervisor to restart a node app when it hits an uncaught exception. The desire to push this responsibility to the container daemon, as one way of doing this, is also clear to me: it simplifies the deployment as a whole since you don't need to run a process manager anymore. However, this feature is new to docker; they are still figuring it out, and I'm sure they will soon.

Side bar: @josegonzalez or @bobvanderlinden, have either of you chimed in over there about our use case?

My strong opinion is that we shouldn't add complexity to dokku, potentially removing the ability to troubleshoot the entire deployment process, and burn cycles to implement a docker feature that is clearly in flux. @josegonzalez can we remove the auto restart policy on the app containers and recommend a process manager for now, if desired?
I've been fine with removing auto-restart from the get-go and revisiting the on-failure stuff later. Annoying that implementing that feature ended up breaking other stuff :( I don't think this needs a major release, can be a patch, and we'll just add docs about how to resurrect the feature and why we don't (yet) include it by default.
@michaelshobbs Sorry for the late reply. I only asked here: moby/moby#15348 (comment) about on-failure in combination with unless-stopped, but got no response yet. I'll create an issue about using the combination. I agree that any extra complexity shouldn't be needed in Dokku. That said, auto-restart upon failures is a very nice feature and it would be a shame if it were to be removed. I also did not think the 'old' containers would stay indefinitely when they are not used anymore. It can cause disk problems. That is why I still like the docker stop+rm idea. Maybe it should be an option, which solves the reboot problems for those who have problems with that right now. Same goes for the restart policy: a global option to turn it on/off could be considered.
If we remove the broken restart policy then the disk space thing is a non-issue. For now, the prescribed method of auto restarting an app will have to be running a process manager inside the container. Fortunately you can do this easily either with a dokku plugin like dokku-logging-supervisord or using something like supervisor if you're deploying a node app.
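As a sketch of that in-container alternative: a tiny wrapper loop (standing in for a real process manager such as supervisord) that restarts the app whenever it crashes; the `node server.js` entrypoint is an assumption:

```sh
#!/bin/sh
# used as the container's command instead of starting the app directly;
# restarts the app whenever it exits, mimicking --restart=on-failure
while true; do
  node server.js                                  # hypothetical app entrypoint
  echo "app exited with status $?; restarting in 1s" >&2
  sleep 1
done
```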
@michaelshobbs Alright. Yes, dokku-logging-supervisord does seem like a good alternative.
You could also implement this feature as a
@josegonzalez Aah, that's essentially the same as the previous default. Very good to know! Thanks for all the suggestions, I'm closing this.
This is happening again after #2290. Adding restart policy on-failure:10 to docker containers by default causes them to restart on boot. When a container is retired (i.e. stopped after deploying a new version or a ps:restart), it should have its restart policy set to no, possibly in the retire-containers loop in dokku_deploy_cmd (dokku/plugins/common/functions, line 671 at c3f39fe).
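A sketch of what clearing the policy on a retired container could look like; `docker update --restart` only exists on reasonably recent docker releases, and the variable name is illustrative, not dokku's:

```sh
# clear the restart policy on the container being retired so a reboot
# will not start it again, then stop it as the deploy already does
docker update --restart=no "$RETIRED_CONTAINER_ID"
docker stop "$RETIRED_CONTAINER_ID"
```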
@bobvanderlinden @wffurr This is happening for me as well. Multiple old containers running the same things after reboot or restart.
This is a result of docker losing the preferred state of the application after a reboot, as well as not respecting the associated restart policy. This will be partially fixed by #2403, as we can then rebuild the networking configuration properly on reboot and then restart applications as desired. If you'd like to see movement on fixing the issue, please either follow along with that issue or help contribute to fixing it. Thanks.
Description of problem:
A new docker container is created every time I reboot the machine for Dokku apps.
Output of the following commands:

dokku version:
dokku plugins:
docker version:
docker info:
uname -a:

Environment details (AWS, VirtualBox, physical, etc.):
Running Dokku on a clean VirtualBox machine with a clean Ubuntu Server 14.04.3 LTS install.
How was dokku installed?:
How reproducible:
I just did this a second time to make sure it wasn't some mistake:
1. Push the app to the dokku@dokku.local:cycle-remote remote.
2. Run sudo docker ps and verify one container is running.
3. Run sudo reboot.
4. Run sudo docker ps again and notice there are 2 containers running (see the shell sketch below).
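The same steps as a shell sequence; the remote name is taken from the report, and the exact git push syntax is assumed:

```sh
git push dokku@dokku.local:cycle-remote master   # deploy the app
sudo docker ps                                   # one container is running
sudo reboot
# after the machine is back up:
sudo docker ps                                   # now two containers are running
```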
Actual Results:
More reboots result in more running containers!
Expected Results:
The same container that was initially started should be running after rebooting.
Additional info:
I did notice that no web-based installer was started nor was it available on http://dokku.local/. http://dokku.local/ only showed the default nginx page after installing Dokku.
Additionally, after pushing my app to cycle-remote (or cycleremote) I couldn't access the app through http://cycle-remote.dokku.local/ or http://cycleremote.dokku.local/. I noticed NO_VHOST was set to 1, so I used dokku config:unset cycle-remote NO_VHOST. This still did not bring up the mentioned URLs. Only after issuing dokku domains:add cycle-remote cycle-remote.dokku.local did the URL come up. I presume this is unrelated to the issue I'm submitting, but I thought I'd mention it.