Abort job if more than 50% of data is missing after trying x MB #1785
Conversation
Isn't this already implemented as the % required completion value?
I don't think so, but I realize now that I don't understand how req_completion_rate is supposed to be used. Why would you require more or less than 100%? Either way, I think this can let SAB mark the download as failed much earlier when there are a lot of par2 files.
sabnzbd/nzbstuff.py
Outdated
@@ -1172,6 +1173,8 @@ def remove_article(self, article: Article, success: bool):
    # Check the availability of these first articles
    if cfg.fail_hopeless_jobs() and cfg.fast_fail():
        job_can_succeed = self.check_first_article_availability()
        if not job_can_succeed:
            abort_reason = "https://sabnzbd.org/not-complete (check_first_article_availability)"
I understand you want to be specific, but this is not useful for regular users; let's not modify the text.
No need for abort_reason.
sabnzbd/nzbstuff.py
Outdated
job_can_succeed, _ = self.check_availability_ratio()
if not success and job_can_succeed and not self.reuse:
    # Abort if more than 50% is missing after reaching missing_threshold_mbytes
    job_can_succeed = self.check_missing_threshold_mbytes()
This should only be called if cfg.fail_hopeless_jobs() == True, like the other checks.
sabnzbd/nzbstuff.py
Outdated
if self.bytes_missing > self.bytes_downloaded:
    missing_threshold = cfg.missing_threshold_mbytes() * MEBI
    if missing_threshold and self.bytes_tried > missing_threshold:
        return False
Maybe add INFO logging here to notify why the job was aborted.
The other tests will only do a debug log. How about either a debug or info log for each test, like this?
logging.debug('Abort job "%s", due to impossibility to complete it (test: check_first_article_availability)', self.final_name)
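Putting that suggestion together with the diff above, the threshold check plus a debug log might look roughly like the sketch below. This is a standalone approximation, not the merged code: the function name mirrors the PR's check_missing_threshold_mbytes, but the counters are passed in as plain arguments instead of being read from the NzbObject, and MEBI is defined inline.

```python
import logging

MEBI = 1024 * 1024  # bytes per mebibyte, as used by SABnzbd's MEBI constant


def check_missing_threshold(bytes_tried: int, bytes_downloaded: int,
                            bytes_missing: int, missing_threshold_mbytes: int,
                            final_name: str) -> bool:
    """Return False (job cannot succeed) once more data is missing than was
    downloaded, after at least missing_threshold_mbytes of the job was tried.
    A threshold of 0 disables the check entirely."""
    if bytes_missing > bytes_downloaded:
        missing_threshold = missing_threshold_mbytes * MEBI
        if missing_threshold and bytes_tried > missing_threshold:
            # Same log style as the other hopeless-job checks
            logging.debug(
                'Abort job "%s", due to impossibility to complete it '
                "(test: check_missing_threshold_mbytes)",
                final_name,
            )
            return False
    return True
```

For example, with a 50 MB threshold, a job that has tried 60 MB with only 10 MB downloaded and 50 MB missing would return False, while the same byte counts with the threshold not yet reached would still return True.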
sabnzbd/interface.py
Outdated
@@ -1352,6 +1352,7 @@ def saveSwitches(self, **kwargs):
    "selftest_host",
    "rating_host",
    "ssdp_broadcast_interval",
    "missing_threshold_mbytes",
Could you move this after the req_completion_rate? Both here and in cfg.py.
I kept the old logging but added abort_reason so that all three checks behave the same way.
Does this change download 50 MB of an NZB and decide whether it's complete based on a % of that 50 MB? Or does it abort if more than 50 MB is missing altogether? Either of these is prone to false failures...
More than half of the tried data must be available after reaching the configured limit. Yes, it can cause false positives, but then so can the first-article check.
@puzzledsab I did a small refactor.
Not sure how useful this is in the end but at least it's a small change with minimal performance impact. |
@puzzledsab So should we still add it? I am also not certain if it is useful...
I was hoping to hear back from nismozcar in the thread. I remember having had this problem myself a year ago, but I no longer see it because I haven't downloaded the kind of old postings that have that problem anymore. I think some files have enough of the first articles left not to trigger that check. In those cases it can take a while before it gives up, because some servers take a long time to respond when articles are gone.
https://forums.sabnzbd.org/viewtopic.php?f=4&t=25183
Sometimes you know early that a download will fail because there are a lot of missing articles. After having tried missing_threshold_mbytes MB, it will fail if the number of failed bytes exceeds the number of downloaded bytes.

I don't see any perfect solutions for this, and it's a bit hackish, so I didn't make a proper config. The threshold will have to be a guess, and if the user thinks it's a false positive they can retry. I'm very open to suggestions on how to improve it.