Duplicates ignored by NZBGet are falsely labelled as failed grabs #1721

Open
DillonN opened this Issue Feb 28, 2017 · 5 comments

DillonN commented Feb 28, 2017

When downloading an NZB that NZBGet has already seen (and presumably marked as failed), NZBGet marks it with the COPY status and moves it straight to history. However, Sonarr sees this as a failed download. This has two consequences:

  • Incorrectly flags the indexer as failing, and can get it disabled even though it is working fine

  • Can cause downloads to get stuck in the "activity" queue, which prevents re-searching (not quite sure why at the moment)

I can look into this and maybe put together a pull request if I'm right about the cause of these issues, but it won't be for a week or two.

Member

markus101 commented Feb 28, 2017

Incorrectly flags the indexer as failing, and can get it disabled even though it is working fine

Disables it where? Sonarr doesn't link failed downloads to indexers and disable them because of it. Indexers aren't at fault for failures; that's on the server side and not something Sonarr cares about.

Can cause downloads to get stuck in the "activity" queue, which prevents re-searching (not quite sure why at the moment)

That's a symptom of the underlying issue. Sonarr does handle the COPY deleteStatus (https://github.com/Sonarr/Sonarr/blob/develop/src/NzbDrone.Core/Download/Clients/Nzbget/Nzbget.cs#L21), but the issue looks to be that when Sonarr queues the NZB to NZBGet it gets a failed response, so Sonarr never tracks the download, which is why it marks it as failed but not as grabbed by Sonarr.

The path to resolution isn't very straightforward, since Sonarr would need some way to start tracking it as well as treating the grab as a failure.
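
To make that concrete, here is a minimal sketch of the failure path (not Sonarr's actual code; the names and exception type are made up for illustration), assuming the append call returns a positive queue ID on success and 0 when the NZB is rejected:

using System;

class AppendResultSketch
{
    // Interpret the integer NZBGet returns from the JSON-RPC "append" call.
    static string InterpretAppendResult(int appendResult, string title)
    {
        if (appendResult <= 0)
        {
            // NZBGet refused the NZB (e.g. its duplicate check moved it straight
            // to history with DeleteStatus COPY), so nothing was queued and there
            // is no download ID to track; the grab is reported as a failure.
            throw new InvalidOperationException($"NZBGet rejected '{title}' (result {appendResult})");
        }

        // Success: the positive ID is what would be tracked as the download ID.
        return appendResult.ToString();
    }

    static void Main()
    {
        try
        {
            InterpretAppendResult(0, "Some.Show.S01E01.Example"); // hypothetical duplicate NZB
        }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine("Grab treated as failed: " + ex.Message);
        }
    }
}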

DillonN commented Feb 28, 2017

The reason I said it causes the indexer to be disabled is that after around five consecutive copy grabs the indexer shows up as disabled on the status page and stops being used. Going into settings and testing it re-enables it, and then it's rinse and repeat. I haven't had time to look into it too much, but those are the (admittedly basic) observations I noted specifically with copy grabs.

Member

markus101 commented Feb 28, 2017

I tried to reproduce that just now and couldn't, but that shouldn't happen unless the grab from the indexer failed, whereas the NZB failing to append to NZBGet is a failure on the download client.

Member

Taloth commented Feb 28, 2017

And if that grab had failed, it would be logged as an error, so you should see it in the logs: "Downloading nzb for episode '...' failed" or "API Grab Limit reached for ..."

Member

markus101 commented May 25, 2017

Trace log:

17-5-24 23:20:47.3|Info|Nzbget|Adding report [Bull.S01E13.HDTV.x264-FLEET] to the queue.
17-5-24 23:20:47.3|Trace|HttpClient|Req: [POST] http://diskstation:6789/jsonrpc: append("Bull.S01E13.HDTV.x264-FLEET.nzb", [blob 54867 bytes], "tv", 0, false, false, "", 0, "all", [...])
17-5-24 23:20:47.3|Trace|ConfigService|Using default config value for 'proxyenabled' defaultValue:'False'
17-5-24 23:20:47.7|Trace|HttpClient|Res: [POST] http://diskstation:6789/jsonrpc: 200.OK (311 ms)
17-5-24 23:20:47.7|Trace|HttpClient|Response content (54 bytes): {
"version" : "1.1",
"id" : "7c384da8",
"result" : 0
}

History in NZBGet:

[screenshot]

Sonarr's Queue:

[screenshots]

Sonarr is treating it as a failure, but because the add request came back with "result": 0 it never considered the release grabbed, so it won't handle it.

Filtering out items with the Copy delete status might be the easiest way to resolve this, but I haven't considered any negatives of that approach.
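
Roughly, the filter would be something like the sketch below (illustrative types, not Sonarr's actual history model):

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative stand-in for an NZBGet history entry.
class NzbgetHistoryItem
{
    public string Name { get; set; }
    public string DeleteStatus { get; set; }   // e.g. "NONE", "MANUAL", "DUPE", "COPY"
}

static class HistoryFilterSketch
{
    // Drop entries NZBGet deleted as copies of something it already downloaded,
    // so they never reach the failed-download handling.
    public static IEnumerable<NzbgetHistoryItem> ExcludeCopies(IEnumerable<NzbgetHistoryItem> history)
    {
        return history.Where(h => !string.Equals(h.DeleteStatus, "COPY", StringComparison.OrdinalIgnoreCase));
    }
}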
