To avoid putting too much strain on the API and risking rate-limit exhaustion, we should limit the number of parallel deletes (introduced in #761). This could also be made configurable via a preference.
I'm not sure what a good default limit would be; we could derive one from the default API rate limit. In practice, 5 is probably high enough to give the user a noticeable speedup while not doing too much in parallel.
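Something like this sketch could work, assuming the deletes run as async tasks; the names `delete_item` and the default of 5 are illustrative assumptions, not the project's actual API:

```python
import asyncio

# Hedged sketch: cap the number of concurrent delete calls with a
# semaphore. `delete_item` is a stand-in for the real API request,
# and 5 is the tentative default limit discussed above.
MAX_PARALLEL_DELETES = 5
active = 0   # deletes currently in flight (for illustration only)
peak = 0     # highest concurrency observed

async def delete_item(item):
    # Placeholder for the real API call.
    global active, peak
    active += 1
    peak = max(peak, active)
    await asyncio.sleep(0.01)
    active -= 1

async def delete_all(items, limit=MAX_PARALLEL_DELETES):
    sem = asyncio.Semaphore(limit)

    async def guarded(item):
        async with sem:           # at most `limit` deletes run at once
            await delete_item(item)

    await asyncio.gather(*(guarded(i) for i in items))

asyncio.run(delete_all(range(20)))
```

With 20 items and a limit of 5, no more than 5 deletes are ever in flight; the rest queue on the semaphore.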
Shouldn't this be handled by some client-side throttling when we get close to the rate limit? For example, read the rate-limit values from the API and sleep for a bit if we are about to hit the limit.
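As a rough sketch of that idea, assuming the API exposes the common `X-RateLimit-Remaining` / `X-RateLimit-Reset` response headers (the actual header names may differ):

```python
import time

# Hedged sketch of client-side throttling: if the remaining request
# budget drops to `threshold` or below, sleep until the reported reset
# time. Header names are assumptions, not confirmed for this API.
def throttle(headers, threshold=5, now=time.time):
    remaining = int(headers.get("X-RateLimit-Remaining", threshold + 1))
    reset_at = float(headers.get("X-RateLimit-Reset", 0))
    if remaining <= threshold:
        delay = max(0.0, reset_at - now())
        time.sleep(delay)
        return delay
    return 0.0
```

Called after each response, this only pauses when the budget is nearly exhausted, so it complements rather than replaces a cap on parallel deletes.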
TL;DR
Limit the number of parallel ongoing deletes.