Delayed loading of NZBs from URLs #541
Currently the duplicate check works on NZBs but not on URLs. In particular, when processing RSS feeds nzbget downloads all NZB-files (those which survived RSS filtering) and adds them to the download queue. At that moment the duplicate check kicks in and moves some of the NZBs to history as duplicate reserves. In many cases these reserved NZBs are never used, because another NZB for the same title downloads successfully. Yet each NZB-file was still fetched from the indexer and counted towards the user's limit for NZB downloads on that indexer.
Proposal: it would be good if nzbget could perform the duplicate check on URLs and not fetch the NZBs until they are needed for actual downloading.
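To illustrate the idea, here is a minimal sketch (not nzbget's actual code; all names are hypothetical) of running the duplicate check on the dupe-key derived from the feed item *before* fetching, so that duplicate items are held as URL-only reserves and the NZB download happens only when an item is promoted to an active download:

```python
from dataclasses import dataclass


@dataclass
class FeedItem:
    title: str   # dupe-key candidate (e.g. release title)
    url: str     # indexer URL; fetching it counts against the API limit


class DelayedQueue:
    """Duplicate check on URLs; NZB fetch deferred until needed."""

    def __init__(self):
        self.active = {}    # dupe-key -> item being downloaded (NZB fetched)
        self.reserve = {}   # dupe-key -> backup items kept as URLs only

    def add(self, item: FeedItem):
        key = item.title.lower()
        if key not in self.active:
            # first item for this title: this one gets its NZB fetched
            self.active[key] = item
        else:
            # duplicate: store only the URL, do NOT fetch the NZB
            self.reserve.setdefault(key, []).append(item)

    def on_failure(self, item: FeedItem):
        # the active download failed: promote a reserved URL;
        # its NZB is fetched only now, not at queue time
        key = item.title.lower()
        backups = self.reserve.get(key, [])
        return backups.pop(0) if backups else None
```

With this scheme a duplicate that is never needed costs nothing against the indexer's download limit, since only its URL was ever stored.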
Also see the forum topic "Only download NZB if required - RSS feeds?"