[ FEATURE ] Maximum backlogged episodes #281
Comments
Hi Omg, just curious: what is the reason you don't want to send 1,000 torrents all at once?
When using local torrent clients it causes my router to crash every 2-3 hours, and put.io, which I use now, limits my allowed torrents to 100, which means anything after that just goes into the void, never to be seen again.
OK, but what is a processed episode? It can be processed, but it would still exist in your client for seeding, right?
That'd be handled by the user: essentially you'd add x torrents, and once one of those in the list has been post-processed you can add another one, and so on. This would keep the "currently downloading" files at whatever amount you set. Also, in my case put.io seeds 1:1 within a few seconds of the download finishing because of its upload speed.
I think it's pretty hard to keep track of this within Medusa because of all the possible exceptions. Maybe only support it for torrent clients that are able to report back their inventory?
allowed number = max configured - total snatched. It should be that simple, right?
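A minimal sketch of that arithmetic (the setting name and helper below are illustrative, not Medusa's actual configuration):

```python
MAX_BACKLOG_SNATCHES = 20  # hypothetical user-configurable maximum


def free_backlog_slots(total_snatched):
    """Return how many more backlog snatches are allowed right now."""
    return max(MAX_BACKLOG_SNATCHES - total_snatched, 0)


# Example: with 17 episodes already snatched, 3 more may be sent.
print(free_backlog_slots(17))  # -> 3
```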
Then after the allowed number is snatched, Medusa would be useless?
No, after the allowed number it'd just stop adding more backlogged episodes; non-backlogged episodes would still be added. This would also allow the blackhole method to work a lot better.
What's wrong with the blackhole method? Sorry, I don't use it.
I've started on this. BACKLOG_QUEUE is a list of (search_result, timestamp) tuples, and the backlog search skips when no slots are available. The tricky part now is removing items from the list on post-processing; as I don't know anything about the post-processing code, I expect that will take the most time. I'll also put up a PR. It would already be usable, but then you would depend solely on the BACKLOG_SNATCH_TIMEOUT_HOURS setting before new snatches are allowed.
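Roughly the idea, as a hedged sketch of the queue described above (a simple in-memory list with a timeout; this is not the actual Medusa implementation, and the names besides BACKLOG_QUEUE and BACKLOG_SNATCH_TIMEOUT_HOURS are made up):

```python
import time

BACKLOG_SNATCH_TIMEOUT_HOURS = 24  # assumed default for illustration
MAX_BACKLOG_SNATCHES = 20

BACKLOG_QUEUE = []  # list of (search_result, timestamp) tuples


def prune_expired():
    """Drop entries older than the timeout so their slots are freed."""
    cutoff = time.time() - BACKLOG_SNATCH_TIMEOUT_HOURS * 3600
    BACKLOG_QUEUE[:] = [(r, ts) for r, ts in BACKLOG_QUEUE if ts > cutoff]


def try_snatch(search_result):
    """Snatch a backlog result only if a slot is free; otherwise skip it."""
    prune_expired()
    if len(BACKLOG_QUEUE) >= MAX_BACKLOG_SNATCHES:
        return False  # no slots available, backlog search skips this result
    BACKLOG_QUEUE.append((search_result, time.time()))
    return True


def on_post_processed(search_result):
    """Free the slot once the post-processor has handled the episode."""
    BACKLOG_QUEUE[:] = [(r, ts) for r, ts in BACKLOG_QUEUE if r != search_result]
```

Without the on_post_processed hook wired into the post-processing code, slots would only be freed by the timeout, which is the dependency on BACKLOG_SNATCH_TIMEOUT_HOURS mentioned above.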
@p0psicles I wouldn't go that way. First we need official support for put.io, because @OmgImAlexis is using a kind of hack to make Medusa support it. This feature is needed only for put.io, because all clients have a "download queue" setting.
@fernandog No, this is also helpful for the blackhole method, and there's no reason not to add it, as it helps with other clients too.
Which clients don't have a "download queue" setting? Also, in 24 hours the torrent may no longer be available, so you will have a stalled torrent or a torrent without seeds.
Well @fernandog, I'm kind of playing around with it, so maybe I can implement something that solves that, using timeouts, and that is usable for users who don't want to accidentally send 1,500 downloads at once to their nzb/torrent client when reinstalling Medusa.
@fernandog Not all of them can handle excessive torrents in the queue; I know a few clients I used before that would go over 250 MB of RAM with 1,000+ torrents, so this would fix that. And if it gets added now or in a few days it'd still get downloaded, so that whole point is invalid.
SAB fails with too many items in the queue as well, although they have been improving their limits.
Another candidate for Throttle |
Added to the master feature request list. Discussion of this feature will continue here even though the issue is closed.
I'd love an option to have only x backlogged episodes snatched at a time, so I could keep my torrent client running at all times without 1,000+ torrents being added all at once.
For example, I could set the limit to 20 and only 20 backlogged episodes would be added; after it hits the limit it would wait for the known backlogged episodes to be processed before adding more. While this is going on, new episodes would still be added. A small sketch of the intended gating is below.
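A hedged illustration of the requested behaviour: only backlog searches are gated by the limit, while regular new-episode searches always go through. The names here are invented for the example and are not Medusa's API.

```python
def should_send_to_client(is_backlog_search, backlog_in_flight, limit=20):
    """Decide whether a snatched result should be sent to the download client."""
    if not is_backlog_search:
        return True  # new episodes are always sent, regardless of the limit
    return backlog_in_flight < limit  # backlog results wait for a free slot
```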