Unpack and par-check failures on Unraid/Docker #558
Comments
If possible please send me the nzb to nzbget@gmail.com. |
I normally remove these downloads as soon as I notice, so as not to confuse other apps with a clogged-up queue. I will keep an eye on it and send one when I experience it again, although I think it might also be related to how fresh the NZBs are and their propagation across various usenet servers. Downloading the same NZB at a later point might result in normal behavior. |
I've tried to reproduce the issue with a manually constructed corrupted download. The download completed correctly with failure status. Maybe you have the log-file at least? Do you remember if these were rar- or 7z-archives? |
I suspect it's somehow related to nzbToMedia. I've removed those scripts for now and will report back once I've tested it for a day or so. As I said earlier, it seems to happen with fresh NZBs (usually in the morning in my case) that fail because they haven't fully propagated yet. |
Downloads with missing articles are usually not unpacked at all if they have par2-files. CRC errors may happen in two cases:
If the issue happens again please save the log-file of the download before deleting download. |
Happened again on a fairly fresh nzb. Emailed both nzb and log. Tried downloading again, it failed correctly. Very odd. Edit: downloaded again with success. |
The log doesn't show a recursive unpack (if by recursive you mean the unpack going on and on). First it tried to unpack directly during downloading. That failed. Then it tried to unpack after downloading, and this is where the log ends. Interesting fact: in both unpack attempts there is no output from the unrar process, as if the process could not start. This is suspicious. Can you please send me the global nzbget log? It contains more info which may be useful. |
Yeah, when I observe the file being unpacked inside the respective folder, I see its file size increase to what looks like its full size; presumably it fails silently and starts from 0 bytes again. Hence it seems recursive. |
Keep experiencing the issue even with archives that seem to be OK, but not all the time. There's nothing in the log other than that it's unpacking and where it's unpacking. When I look inside Can't confirm 100% yet, but restarting the docker seems to solve the problem temporarily, until the next time it gets stuck with some NZB for whatever reason. Any chance it could somehow be related to DirectUnpack? Or perhaps the unrar process? |
In the log you sent me it makes exactly two unpack attempts: one attempt during direct unpack. Unrar reports failure, direct unpack cancels. Later, when everything is downloaded, another unpack attempt is made during post-processing. That attempt also fails with unrar reporting an error. No further attempts are made. That's not a recursive unpack or an unpack loop but rather normal behaviour. Do you have downloads for which more unpack attempts are made? If so, can you please send me their logs? |
The logs exhibit very similar behaviour to what another user reported on the forum in the topic "I'm getting numerous PAR: FAILURE errors":
What do @woble and ChapeL (forum user) have in common? It's Unraid with docker. |
6 equally sized and branded SSDs in btrfs RAID10
Difficult to tell, but could be around unRAID 6.5.x or maybe even 6.4.x
A btrfs scrub can be done, but it's unlikely to be related to that. Will run a scrub to see if there are any corruptions. Edit: ran the scrub, it did not report any errors. I had the issue this morning again (seems to be every morning or early in the day; at least that's when nzbget is doing most of the work). With the queue being stuck behind one NZB not unpacking correctly, I restarted nzbget and lo and behold everything unpacked as expected. So could it be that the unrar process somehow hangs or gets stuck in a loop and doesn't communicate properly back to nzbget about it? |
Yes, unrar does get stuck sometimes (according to the log), but I don't know whether that's an unrar issue or a system issue (unrar can't read files). You use unrar 5.60 beta although the nzbget installer package includes unrar 5.50. It seems the docker maintainer put a separate unrar into the package instead of using the unrar included with nzbget. For a test change UnrarCmd to To rule out all docker issues I suggest installing nzbget without docker using the installer from the nzbget home page (follow the installation manual linked on the download page). When installed using the installer all files are placed into one (application) directory, which can easily be deleted later. It will not affect the system in any way. |
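For anyone following along, a rough sketch of what checking the unrar situation inside the container could look like; the container name and paths are assumptions that depend on the image, and the UnrarCmd value shown is a hypothetical example, not the one suggested above.

```sh
# Open a shell inside the running container (container name is an example).
docker exec -it binhex-nzbget /bin/bash

# See which unrar nzbget would pick up from the PATH and its version banner.
which unrar
unrar | head -n 2

# Look for any other unrar copies that may ship with the image.
find / -name unrar -type f 2>/dev/null

# Then point nzbget at a specific binary via Settings -> Unpack -> UnrarCmd,
# e.g. (hypothetical path): UnrarCmd=/usr/bin/unrar
```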
I've switched to @linuxserver docker for now to see if it experiences the same issue. I am not sure installing nzbget directly on unraid is as straightforward as expected. Alternatively running it inside a VM could be an option. Perhaps @binhex can be of some help in regards to unrar version in his docker. |
For Arch Linux unrar is not packaged as part of nzbget, it's an optional package; however I did bundle unrar with my base image due to a lot of applications requiring it, and it's true that the version installed is a beta (Arch Linux targets bleeding edge versions). So I have included a specific version of unrar to be installed, which should force this up to 5.6 stable. @woble, not sure if you are up to testing this again, I understand if you have moved over to LSIO and don't want the hassle of moving back, but I would love the feedback to at least know it's now fixed (image just built). |
@binhex, sure thing. I still have the docker just not running it. I can switch back to your docker for a few days to test whatever needs testing. Edit: switched back to the latest |
@binhex Also, last time this happened I checked the processes in the docker and the unrar process was running. I guess something makes it not communicate with nzbget, or perhaps the process itself is stuck on something. Could be an unrar issue, unless we can test 5.6 in another docker to rule that out. |
OK, keep me updated. If you see it in a stuck condition again then please post here; I can roll this back to 5.5 stable if required. Sounds a little bit suspect though tbh, I wouldn't have thought unrar 5.6 stable would be so flaky, hmmm. |
I repeat the important part:
That means:
I think it's very unlikely that unrar (even a beta) can have such a fatal bug. It would really be best to install nzbget directly and test for a while to rule out a possible issue with the docker system. Also (even if you continue using the docker version) try this:
With these settings the full par-check will be performed before the unpack attempt and we will see if the files were OK before unpacking. NOTE: These settings slow down post-processing and are not recommended for normal use. When the issue occurs again please send me the nzb-log and the full log. |
Files appear to be written just fine, because if I do a manual unpack using 7z (haven't tried unrar) via the command line, the file unpacks just fine.
As I mentioned before, it works the moment I restart the docker; nzbget picks up where it left off, which is unpacking, except this time it works as expected. Will try the settings you mentioned and report back as soon as I can.
Will try that as the last thing. Otherwise the linuxserver docker seems to have worked just fine. |
A full par-check is made during the process. If there are enough par2-files, the files are repaired during the full par-check. Unpacking with 7z after that doesn't mean the files were OK at the first stage. The files must be damaged since the full par-check reported errors in multiple files. There is a chance that nzbget (and unrar too) can't read the files properly (and therefore unrar fails and the nzbget full par-check reports errors) but 7z can. Do you run 7z within the same docker container? |
7z is installed directly inside unraid. |
LOG emailed.
Will set up nzbget directly on unraid and run it to see how it performs in the same configuration. |
@woble thanks for doing more digging into the issue. FYI I will be creating a test-tagged docker image later on today that will include unrar 5.5.0; if that seems to 'fix' the issue then perhaps we have an incompatibility in the interaction between nzbget and unrar 5.6.0, let's find out more later on. @hugbug I'm sure you're a busy person, but any chance you could test the latest stable nzbget with unrar 5.6.0? Preferably running on linux using mono to closely simulate the problem? It doesn't have to be docker, as I don't believe the issue is related to docker. |
@woble it's there now, add tag named 'test' to the end of the repository name in unraid web ui for the docker container config, if you are unsure how to do this look here Q12:- https://lime-technology.com/forums/topic/44108-support-binhex-general/?tab=comments#comment-433612 |
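In case it helps anyone follow along, the resulting pull would look roughly like the following; the repository name is an assumption and may differ on your setup.

```sh
# Pull the test-tagged image explicitly (repository name is an assumption).
docker pull binhex/arch-nzbget:test
```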
@woble how are you getting on with the test image? Is unpacking more reliable now? |
So I am back on the linuxserver.io docker and so far I have had no issues other than occasional actual bad downloads. @binhex, perhaps the bleeding edge nature of arch somehow conflicts with the rest of the configuration (both hardware and software)? @hugbug, feel free to close the issue as I feel it's not really related to nzbget directly. We can still continue the discussion here if that's alright. |
@binhex, same symptoms as @woble and using your latest docker image. The web UI shows the unpack has hung, netdata shows the docker continuing to use 40% CPU, and in a terminal I can see the files being written but then reverting to zero length and cycling again. A restart of the docker will complete the unpack fine. Currently on unRAID 6.5.3. I upgraded from 6.5.1 and to the latest docker when I first noticed this issue. Happy to help try and debug if useful. Let me know the best forum. |
When the issue happens again please try this: cancel post-processing of the current nzb using the nzbget webui by clicking on the nzb item in the queue and then using the actions menu (button Actions). The processing of the nzb should cancel and it should go straight to history without other processing (no par-check etc.). Do not restart the docker. After that, try to unpack the files in a terminal using unrar and make two tests: once using the unrar from the docker image and then using the unrar from the nzbget package (near the nzbget binary). Post results here. After that please make another test: set option UnrarCmd to |
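A sketch of how those two manual unpack tests might look from a shell inside the container; the directory, archive name, and both unrar paths below are assumptions, and the second binary only exists if nzbget was installed with the official installer, which keeps its own unrar next to the nzbget executable.

```sh
# Go to the intermediate directory of the stuck download
# (path and archive name below are examples, not the real ones).
cd /data/intermediate/Some.Release
mkdir -p out-system out-nzbget

# Test 1: the unrar shipped with the docker image (whatever is on the PATH).
/usr/bin/unrar x -o+ Some.Release.part01.rar out-system/

# Test 2: the unrar sitting next to the nzbget binary (hypothetical path).
/app/nzbget/unrar x -o+ Some.Release.part01.rar out-nzbget/
```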
OK. So I had a "stuck at unpacking" download. I canceled post-processing from the UI. Without restarting the docker, I went into the console:
As it happens, I have another download still stuck in the unpacking stage, so let me know if there is anything else I can do to help track this down. |
I thought the binhex docker image uses the nzbget installer like the linuxserver image does, but now I see that isn't the case. The binhex image uses nzbget from the Arch Linux distribution. @binhex: I'm not a docker specialist so maybe you can tell me. All programs inside docker use the host kernel. When you put a program compiled for Arch Linux, which is known to be a bleeding edge Linux distribution, into a docker container which runs on a Linux host with a much older kernel than Arch, can't this cause trouble? NOTE: the nzbget installer from the nzbget home page is statically linked and purposely targets a very old Linux kernel (2.6.something). This binary works on most (all?) Linux systems. @PeteBa After that you can delete the installation directory created by the installer. The installer puts all files into this directory and doesn't change any other files on your system. Therefore after removing the installation directory there will be no traces of nzbget left. |
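For reference, a minimal sketch of an installer-based test on the host; the installer file name and default install directory are assumptions, so take the real link from the nzbget download page.

```sh
# Download and run the official installer; it unpacks everything into a single
# directory and does not touch the rest of the system (file name is approximate).
wget https://nzbget.net/download/nzbget-latest-bin-linux.run
sh nzbget-latest-bin-linux.run

# Start the daemon from that self-contained directory for testing...
./nzbget/nzbget -D

# ...and remove every trace later by deleting that one directory.
rm -rf ./nzbget
```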
So I gave this what I could. I subbed the nzbget version of unrar into the docker and had the same issues. I then swapped in the stock nzbget executable and didn't see the issue repeated, albeit in a limited test period. I've now gone with the linuxserver version to keep it simple and over the last day and a half have had no issues. |
@hugbug reply to your question:-
So as far as I know there are seldom links between applications and specific kernel versions. Yes, you are right, Arch Linux does tend to go for the latest stable or even in some cases beta releases; however in practice I have seen (to date) no issues with an older (or in some cases newer) kernel with Arch Linux running inside a docker container (unraid runs the latest stable kernel at time of release). But yes, I guess if there were packages that target particular kernel versions then theoretically it is possible that this might cause problems, but you have to keep in mind docker only runs what you tell it (no systemd), so the number of processes running is very small, thus the potential for any issue is fairly remote (near zero). I'm going to take another poke around and see if I can see what the hell is going on; I think for now I'm fairly confident the issue is not related directly to the version of unrar. |
@binhex will try tomorrow. |
@binhex, I must be doing something wrong. From the UnRaid docker tab, changed the repo to From the console, I can see the app is running. Looking at /config/nzbget.conf shows port 6789 is configured. Test connection from Sonarr is successful. Swapping back to the latest tag and the webui comes up no problem. [update] Having said all of the above, I have used Sonarr to download 3 series of 10 episodes each without any issues. So the original "stuck at unpacking" seems good so far. [... and some more ...] So it looks like the latest tag uses Will keep you posted on whether this sorts the unpacking problem. |
Fails as before on my end. |
@woble OK, thanks for checking. I'm out of ideas then; I guess for now you either roll back to 19.0 or you use the LSIO version. Maybe I will have more luck with 21.0 whenever that's released. |
How exactly does it fail? It seems two different issues were reported:
Which of these issues do you observe with the new docker which uses the nzbget installer? |
@hugbug good challenge to get to the root cause of the problem:
[updated] |
@PeteBa BTW, when you edit a post no notifications are sent and your additions to the post go unnoticed. |
@PeteBa @woble I have refreshed the base docker image and rebased nzbget from it; it's still currently using the nzbget installer to install. Please can you guys give it a whirl and see if things stabilise. NOTE: This is currently tagged as 'latest' NOT 'test'. p.s. personal thanks to @hugbug for continuing to help try and debug this, I appreciate it. |
@binhex |
@woble ta |
@binhex Have been running into the same issue as I did before for the past two days. I'll revert back to the LSIO docker for now and if I get the time I will try a fresh install of yours. |
Closing this with the hope the issue is resolved. Feel free to reopen or add more posts if that's not the case and if I can somehow help. |
Hi all. I came searching for a solution to this issue as it seems to still be around. I'm running unRAID 6.6.6 and the latest version of the binhex docker. It happens to me out of the blue a couple of times a week, I go to have a look and there is a queue of files at the 'pp-queued' stage stuck behind a file that is 'unpacking'. All it takes to clear it is a simple restart and it is all systems go until the next time. All the files complete successfully after that. |
@CaptainMalarkey Consider switching to the linuxserver.io docker. I haven't had that issue on that one since I switched. |
@woble After reading through the thread here I have switched to test how the linuxserver.io version goes. I was very happy that my exported settings from the binhex version loaded up and worked perfectly in the ls.io docker; I was not looking forward to setting all that up again! Mainly posted to let people know that it doesn't seem to be fixed as of February 2019, if they make their way here looking for information like I did. |
I've been dealing with this issue for a very long time and thought I was going crazy. This reddit post worried me due to a comment about possible failing drives, and so I thought this was all behind me when I bought new SSDs around December. And... same issue. I finally was able to work around it by adjusting my settings so that one stuck download didn't stop up the entire queue, but it still spiked my CPU load and would cause all sorts of hangs on my server. I will switch to linuxserver's docker as that appears to be the fix, but I never would have guessed that, so THANK YOU ALL for digging until you found a solution. I especially want to thank the NZBGet team, your response to this issue was top notch. You could have easily brushed away the original issue because of all the special cases (binhex, docker, unraid) but instead you all went above and beyond. I just wanted to say thank you, not for finding the issue and saving me from pulling my hair out (I mean, thank you for that too) but for how you all responded and for making the open source world better for being a part of it. You all rock! Edit: If you are like me coming from Binhex's dockers you might run into the same issues I did:
I had to fix 2 things:
Problem 1 is as easy as editing your nzbget.conf file and replacing the |
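A hedged example of the kind of config edit meant here; /data and /downloads are only assumed defaults for the binhex and linuxserver images, and the appdata path is a guess, so check your own nzbget.conf and container mappings for the real values.

```sh
# Stop the container, then inspect which options still point at the old path...
grep -n '/data' /mnt/user/appdata/nzbget/nzbget.conf

# ...and rewrite them to the path used by the new container (paths are assumptions).
sed -i 's|/data|/downloads|g' /mnt/user/appdata/nzbget/nzbget.conf
```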
@binhex I'm seeing this same issue. I'm using your sonarr docker also. If sonarr initiates the download, then it continually unpacks. If I go to nzbget and add the nzb file manually, it completes and updates sonarr. |
So this was never solved, I take it? Binhex just stopped supporting it and everyone is switching to linuxserver's version instead? I would rather not, but I guess I can. Is there a way to manually update unrar in NZBGet so I can just try to fix it myself? |
I've not stopped supporting it, I just don't know how to fix it, simple as that really. If anybody has any ideas I'm all ears. |
It seems that nzbget recursively unpacks archives that fail because of a `CRC Failed` error. Double-checked using 7z to confirm the CRC error. I've been having this issue for quite a while now with some archives, prior to and in v20. Because of that the pp-queue piles up, regardless of the configured strategy for post-processing. To reproduce you would obviously need an archive that returns a `CRC Failed` error. Expected behavior: the download should be marked as bad. Running on unraid as the `binhex-nzbget` docker.