How to: Not Timeout during a large download transaction #235
Comments
A possible approach that comes to mind for a large/slow download could be to break the data-receive operation into smaller, chunked receives. You could then apply a timeout to each request for a further chunk - effectively you would be gauging 'whether data is being actively received from the server' by specifying some throughput below which you consider it to be failing, eg: "If I don't receive another 100KB in 10 seconds, treat it as a failure". The usual approach to chunking a receive is to read the response stream in chunks of a size you specify, similar to this example; a timeout policy can then be applied to each chunk read. This isn't a pattern I've had cause to implement in my line of work - please let us know how you get on (if you take this approach - or indeed another), because it would make a great example to add to the wiki! |
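A minimal sketch of that idea, assuming `HttpClient` and Polly's optimistic `TimeoutAsync` policy; the method name, the 100KB buffer, and the 10-second timeout are illustrative choices, not a prescribed implementation:

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Polly;
using Polly.Timeout;

public static class ChunkedDownloadSketch
{
    public static async Task<byte[]> DownloadWithPerChunkTimeoutAsync(HttpClient client, string url)
    {
        // Optimistic timeout: Polly signals timeout by cancelling the token it
        // passes into the delegate, so the read must honour that token.
        var perChunkTimeout = Policy.TimeoutAsync(TimeSpan.FromSeconds(10), TimeoutStrategy.Optimistic);

        // Ask for the headers only, so the body can be streamed chunk by chunk.
        using (var response = await client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead))
        {
            response.EnsureSuccessStatusCode();
            using (var source = await response.Content.ReadAsStreamAsync())
            using (var destination = new MemoryStream())
            {
                var buffer = new byte[100 * 1024]; // "another 100KB ..."
                int bytesRead;
                do
                {
                    // "... in 10 seconds": each chunk read gets its own timeout.
                    bytesRead = await perChunkTimeout.ExecuteAsync(
                        ct => source.ReadAsync(buffer, 0, buffer.Length, ct),
                        CancellationToken.None);
                    destination.Write(buffer, 0, bytesRead);
                } while (bytesRead > 0);

                return destination.ToArray();
            }
        }
    }
}
```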
Hi @CliffCawley Did the above help at all? |
@reisenberger even if it didn't help Cliff, it was valuable advice for me - thanks for that! I'll try to implement this and will come back to provide feedback afterward. It could take a while, though, since I need to prepare the API for providing discrete chunk data transfer, too. |
@TheFlow0360 Perfect! Once you have something in place for this (or equally @CliffCawley ), we would love to have your example for the wiki! |
Ok, I ended up implementing this nearly the same as described in the SO question you provided, @reisenberger, because I wasn't able to use my common API connection (RestSharp doesn't support async file download - I took a look at their code, and it would be too much pain to fix this).
This didn't change all that much; I just removed the filename acquisition because I didn't need it. Also, the cancellation token doesn't make sense in this setup - you need to either pass it in or handle the token source somewhere else. The main part is the new extension method for `HttpClient`.
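A sketch of what such an extension method could look like, assuming it streams the body to a file and runs each chunk read through a caller-supplied Polly policy; the name `DownloadFileAsync`, the signature, and the buffer size are illustrative, not the original code:

```csharp
using System.IO;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Polly;

public static class HttpClientDownloadExtensions
{
    public static async Task DownloadFileAsync(
        this HttpClient client, string url, string filePath, IAsyncPolicy policy)
    {
        using (var response = await client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead))
        {
            response.EnsureSuccessStatusCode();
            using (var source = await response.Content.ReadAsStreamAsync())
            using (var target = File.Create(filePath))
            {
                var buffer = new byte[80 * 1024];
                int bytesRead;
                do
                {
                    // Each chunk read is executed through whatever policy the caller
                    // supplies (eg the timeout/retry/timeout wrap discussed below).
                    bytesRead = await policy.ExecuteAsync(
                        ct => source.ReadAsync(buffer, 0, buffer.Length, ct),
                        CancellationToken.None);
                    await target.WriteAsync(buffer, 0, bytesRead);
                } while (bytesRead > 0);
            }
        }
    }
}
```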
Of course, the details of the policy are something everyone has to decide for themselves. I chose a timeout for the call itself (I took 10 seconds), wrapped by a policy that retries with an exponentially increasing delay (2^attempt), wrapped by another timeout that limits the whole operation (in my case 60 seconds). One problem I noticed when testing this is that the server side may cancel the stream if the client waits too long between retries - resulting in an IOException. I don't know if there is any other way an IOException may occur. I decided not to retry on IOExceptions to avoid retrying when the server has already cancelled, but this could be bad; in any case, I don't know how to distinguish other IOExceptions. @reisenberger do you have any idea regarding the IOException, or any other remarks/suggestions? If not, I would add this to the wiki. |
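A sketch of a policy wrap shaped like the one described above, using the stated values; the retry count and the decision to handle only `TimeoutRejectedException` are illustrative assumptions:

```csharp
using System;
using Polly;
using Polly.Timeout;

// Inner: 10-second timeout applied to each individual chunk read.
var perChunkTimeout = Policy.TimeoutAsync(TimeSpan.FromSeconds(10), TimeoutStrategy.Optimistic);

// Middle: retry timed-out chunk reads with exponentially increasing delay (2^attempt seconds).
var retryOnTimeout = Policy
    .Handle<TimeoutRejectedException>()
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

// Outer: overall cap of 60 seconds.
var overallTimeout = Policy.TimeoutAsync(TimeSpan.FromSeconds(60));

// Outermost policy first: overall timeout -> retry -> per-chunk timeout.
var policy = Policy.WrapAsync(overallTimeout, retryOnTimeout, perChunkTimeout);
```

Note that if this wrap is executed once per chunk read (as in the extension-method sketch above), the outer 60-second timeout caps a single chunk including its retries; to cap the entire download instead, the outer timeout would need to wrap the whole download call.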
Thanks for all the info! Sorry I didn't reply, I've been busy with a deadline and recovering from a cold so I've been rather out of it :( I've tagged this to come back to in the future. I use Polly with Refit, so I'm going to have to evaluate how I will solve this one then. |
@TheFlow0360 Many thanks for spending the time on this for the benefit of the community! Re the additional retry around obtaining chunks: do we know / have you been able to test that, when we as the caller cancel a chunk we are reading, the underlying response stream remains usable for the retried read rather than being torn down? |
@reisenberger I didn't test this thoroughly. What I did as a smoke test was to modify the execution of the policy in the download extension method. |
@TheFlow0360 No worries. I'm going to prioritise other Polly areas over exploring the effects of mid-stream cancellation for now. (@TheFlow0360 I may extract a version without the retry to the wiki; it illustrates at least timeout-per-chunk downloading. Thanks again for the contribution!)

@CliffCawley / future readers: a fuller implementation of chunked reading with timeouts should probably also make use of 206 Partial Content / Range headers, for servers which support them: see eg Rick Strahl's article. Microsoft's .NET API browser suggests this is supported in .NET Standard from 1.1. Again, if anyone has cause to use this and can work it up into a full example, we'd love to see it! |
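A small sketch of a ranged request with `HttpClient`; the URL and byte offsets are placeholders, and a server that supports range requests answers with 206 Partial Content:

```csharp
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static async Task<byte[]> DownloadRangeAsync(HttpClient client, string url, long from, long to)
{
    var request = new HttpRequestMessage(HttpMethod.Get, url);
    request.Headers.Range = new RangeHeaderValue(from, to); // eg 0..102399 for the first 100KB

    using (var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead))
    {
        // A server honouring the Range header replies 206 Partial Content;
        // otherwise it may return 200 OK with the full body.
        if (response.StatusCode != HttpStatusCode.PartialContent)
        {
            response.EnsureSuccessStatusCode();
        }
        return await response.Content.ReadAsByteArrayAsync();
    }
}
```

Each ranged request (or retry after a failure) can then ask only for the bytes not yet received, which pairs naturally with the per-chunk timeout approach above.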
Closing due to lack of further activity. Anyone feel free to re-open to continue the discussion, tho. |
It seems that I am having a similar issue, though it is not related to this project. Can anyone here help me fix this? https://stackoverflow.com/questions/60464596/unexpected-behavior-using-zip-stream-npm-on-google-k8s |
If I'm receiving a large file on a slow connection, is there a way to make Polly not timeout and retry the connection if data is being actively received from the server?
For example if my timeout is 10 seconds because I want to fail fast, but I'm in the middle of receiving a large transaction (let's say an image asset), is there a way to let Polly know that data is actively being received from the server so that it doesn't simply kill the connection because it's taking longer than 10 seconds?
Simply increasing the Timeout won't work here, because if the download is 100 MB on a bad 3G connection I would be forced to use an unreasonably large Timeout for the call, which means that if there's an actual issue contacting the server I won't know until that Timeout has expired.