Container does not start after reboot #9

I applied the latest DSM upgrade a few days ago and thought that the container worked fine, since I am using the latest version of this script. But after 3 days I got an email from CrashPlan saying that my client had not connected for 3 days.

So today when I looked closer I noticed that the CrashPlan container had not mounted any of the user-specified volumes. So I recreated the container, and when I checked again they were mounted properly.

As one more step I wanted to reboot the Synology to check if the container started properly. But it did not start at all.

The following is logged in the syslog of the Synology:

If I manually execute

```
/usr/local/etc/rc.d/S99crashplandocker.sh status
```

it starts properly. Any hints on what goes wrong and where to start looking?
I am just updating to DSM 6.0-7321 Update 3 this morning, so I'll definitely be on the lookout for issues on my end. I had problems on restart when I first upgraded to DSM 6, and it seemed to be due to symlinks being recreated on startup around the time I was trying to use them. In the meantime, would you mind looking in the log file?
Quick update - I updated to Update 3 and things seem to be working properly (although the container didn't start on the first boot because the Docker daemon didn't seem to start soon enough). I did some digging through Google, Synology forums, and my NAS log files... I can't find anything that appears relevant to the issue you're having. It's worth pointing out that your logs say there's no

One other thing that may be worth a shot is reinstalling the Synology Docker package. I don't necessarily think that'll fix anything, but it's along the lines of a reboot as far as being an easy thing to do that might have a useful effect.
Thanks for the input... and the awesome container :) Took a look at the log file you mentioned, and it sure looks like the Docker daemon isn't ready by the time the container start script wants to start the container. Here is the relevant part from the log file that shows the reboot I did this morning:
The issue above is probably not related. What happened is really weird: the container was up and running, but without any volumes mounted. When I installed the Synology update I just checked the output of the status report from the script, and since it said the container was running I thought everything was ok.

The update I installed was Update 2... I just received the email about Update 3, so I will try that soon and report what happens. But I guess that unless something changes in this start script, the container will not start by itself. Or, depending on timing, it might happen to start properly.
That could be a permissions issue with the wrong user running the script. I notice the same thing when I'm not root and try to run the script (or docker commands). FWIW, my install is working OK.
After installing Update 3 the container was started... but again without mounting /data and /volume1, so it is not working properly now. This is what is logged:
Doing a "stop" and then "start" does not fix the problem. Btw, "status" shows Crashplan Service Info even though it is no longer running.
FYI, I am root when I'm manually invoking the script. The log snippets I have posted tonight are from when the script is called by the system.
It looks like the container goes bad... the only way I know of to get the container back in working order is to issue a recreate. Then I can see that the volumes are mounted properly.
And I tried reinstalling the Docker package, but the warning message in syslog is still there.
Doing a manual reboot of the Synology from the cmdline... when the Synology comes back up, the container is not running. Manually starting it, and everything is fine again. So I'm seeing two issues here:

1. After a reboot, the container does not start by itself.
2. When the system does start it, it comes up without /data and /volume1 mounted.
Ok, I think one approach here is to slip a check/wait loop into the script: check to see if the Docker socket is ready before trying to start the container. I'll get something together, probably tonight. This seems like a good thing to have in general.
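A first cut of such a wait loop might look roughly like this; a minimal sketch, assuming the standard socket path `/var/run/docker.sock` and an arbitrary 60-second timeout (both values are assumptions, not taken from the actual script):

```sh
#!/bin/sh
# Sketch of a check/wait loop: poll for the Docker socket before doing
# anything else. Path and timeout are assumptions, not the real script.
DOCKER_SOCKET=/var/run/docker.sock
TIMEOUT=60

waited=0
# This first cut uses -e (path exists); see later in the thread for why
# -S (path exists AND is a socket) turned out to be the better test.
while [ ! -e "$DOCKER_SOCKET" ]; do
    if [ "$waited" -ge "$TIMEOUT" ]; then
        echo "Docker daemon not ready after ${TIMEOUT}s, giving up" >&2
        exit 1
    fi
    sleep 2
    waited=$((waited + 2))
done
```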
I checked my |
Two related changes to address issue #9:

- The time it takes for the Docker daemon to be ready appears to be unreliable. Add a configurable wait period to give it some time.
- Running Docker commands as a non-root user isn't going to be effective. Test for this immediately: it provides an easy way to eliminate a class of problems and not waste anyone's time.
I added a timeout to solve what I think is the root cause of this issue, and a root check to prevent permissions issues from masquerading as other problems. Give the update a shot when you get a chance, and let me know how it goes. Thanks for reporting this and including logs as part of your troubleshooting. I hope you'll be OK now, but if not I'm sure we'll figure it out together.
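For reference, a root check along the lines described above could be as simple as this (a sketch, not the exact code that landed):

```sh
# Fail fast if not running as root, so permission errors from the Docker
# CLI don't masquerade as unrelated problems.
if [ "$(id -u)" -ne 0 ]; then
    echo "This script must be run as root" >&2
    exit 1
fi
```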
Did a quick test... Sync'd to the latest and then rebooted my DS.
My client won't attach. I manually stopped and started the container from the shell on the NAS. The client still won't attach. I wonder if, on a reboot/restart of the NAS, the container should be recreated?
Thanks for testing. It looked OK on mine, but that means something is still inconsistent. The script should only try to run the image and recreate the container if it can't find an existing container. I'm wondering if the test I used (`-e`) may be at fault: I'm testing to be sure the socket file exists, but not confirming that it's a socket. I wonder if that's introducing a small window where the path exists but isn't actually a usable socket yet. I switched the syntax in the daemon-timeout branch to test for a socket instead.

I'm going to test it a number of times locally in the meantime, and see if I can find any inconsistent behavior. My setup was working before though, so what I'd really like to see is someone else's setup working :).
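For anyone following along, the difference between the two tests (standard shell `test` semantics, not code from the script):

```sh
# -e passes as soon as the path exists, whatever its type:
[ -e /var/run/docker.sock ] && echo "path exists"

# -S additionally requires the path to actually be a socket, which
# closes the window where the path exists but isn't usable yet:
[ -S /var/run/docker.sock ] && echo "path exists and is a socket"
```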
With the latest script from the daemon-timeout branch, the container starts after reboot. But it is not working properly: the volumes are not mounted properly. Here are logs from the reboot attempt:
So it looks like it waits for Docker to be up. But then it can't find the existing container, so it tries to create one... but by then the existing container is found. It's a pity that the stock Docker implementation on the Synology is half-baked, i.e. that there is no official support for automatically starting containers upon reboot.
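The shape of check that can race like this is roughly the following (a hypothetical sketch, not the actual script; `crashplan` and the image name are placeholders): if the lookup runs a moment before the daemon has restored its state, the script falls into the create branch even though the container exists.

```sh
# Hypothetical existence check: docker inspect exits non-zero when no
# container by that name exists (yet).
if docker inspect crashplan >/dev/null 2>&1; then
    docker start crashplan
else
    # If the daemon hasn't fully restored its state, we can end up here
    # even though the container actually exists.
    docker run -d --name crashplan some/crashplan-image
fi
```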
After reading up a bit on Docker, I don't think you need a start script at all. Since we manually create the container outside of DSM, we can use the `--restart=always` option and let the Docker daemon bring the container back up itself.

So as a test I modified the `docker run` line in the script to include that option (see the sketch below) and recreated the container. I also removed the symlink to the start script from `/usr/local/etc/rc.d/` and rebooted.

When the system was back up, the CrashPlan container was running and the volumes were mounted as they should be. Here you can see the 3rdparty logfile and the container status output.
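Roughly, the change to the run line has this shape; a hedged sketch, where the image name and host-side paths are placeholders (the `/data` and `/volume1` mounts are the ones named earlier in this thread):

```sh
# Before: no restart policy, so nothing brings the container back after boot
docker run -d --name crashplan \
    -v /volume1:/volume1 -v /volume1/docker/crashplan:/data \
    some/crashplan-image

# After: the Docker daemon itself restarts the container when it comes up
docker run -d --name crashplan --restart=always \
    -v /volume1:/volume1 -v /volume1/docker/crashplan:/data \
    some/crashplan-image
```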
As you can see, your script is no longer called and yet the container is up and running :) I have tried rebooting twice, and the container came back up successfully both times with this modification (with volumes mounted). So unless there is an edge case I'm missing, I think you only need to provide a script that can create the container and show its status. There is no need to have a script installed in `/usr/local/etc/rc.d/`.

The only thing I haven't tested is if this survives a DSM upgrade. That I cannot test until a new update is available :)
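A create/status-only script could then reduce to something like this sketch (placeholder names and paths again, not the actual script):

```sh
#!/bin/sh
case "$1" in
    create)
        docker run -d --name crashplan --restart=always \
            -v /volume1:/volume1 -v /volume1/docker/crashplan:/data \
            some/crashplan-image
        ;;
    status)
        docker inspect -f 'running: {{.State.Running}}' crashplan
        ;;
esac
```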
That's great @nilrog :). While troubleshooting this issue it did seem like I was hackily working around functionality Docker already provides. Thanks a lot for doing the normal thing and helping us both avoid a bunch of unproductive flailing.
You're welcome! It feels good to contribute something back and not only use your solution and hope for fixes :) This Docker solution is so much better than the "native" package for CrashPlan. That one worked great in the beginning, but the constant breakage from all the automated upgrades was a big hassle, as you also discovered before me (I was looking at doing this myself when I found your solution). It has been flawless so far... apart from when you have rebooted the NAS.

It could be that the restart option didn't work in DSM 5.x? But in DSM 6 it looks like it is working as it should. But please give it a good test run. I will keep this as is, wait for the next DSM upgrade, and see how that works.
Confirmed @nilrog's solution. Thanks! And Docker was upgraded from DSM 5 to DSM 6: I think DSM 5 had Docker 1.6 and DSM 6 has 1.9.1. I actually opened a bug against Synology to upgrade the Docker package, so maybe I helped? :-)
Thank you both very much, you guys are pros. I'm glad we're all up and running, but I also like that there's now a way forward that should be more reliable for anyone (likely ex-patters folks) who stumbles across this. I wouldn't have even known how fragile it was if not for you two. I'm thinking I'll rename the script file so it doesn't look like a startup script, then work in @nilrog's changes and try not to let the documentation lag too much. In the meantime, may your backups be smooth :).
Was about to try this out when I noticed this issue thread. From what I can gather, the way to go now is to modify the `docker run` line in the script to include `--restart=always`, and skip the startup symlink in `/usr/local/etc/rc.d/`?
@ericsvendsen Yes that should do the trick. To really close out this issue, I'll need to do some script/documentation tweaks to clean things up and stop attempting to treat this as a startup script. To be honest my setup has been working without problems for so long it's easy to forget about the script until someone asks a question. So thanks, I think! :)
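For an existing setup, one way to switch over might look like this (a sketch; `crashplan` is a placeholder container name, and the rc.d path is the one shown elsewhere in this thread):

```sh
# Remove the old container and recreate it with the restart policy.
# Data on the mounted volumes is untouched by rm.
docker stop crashplan
docker rm crashplan
# ...re-run the modified create command with --restart=always, then make
# sure the old startup symlink is gone so the script is no longer invoked:
rm /usr/local/etc/rc.d/S99crashplandocker.sh
```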