Copying large files fails after 5s timeout #102
A workaround for this issue is to change the `large_object.chunked_obj_len` setting.
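A minimal sketch of what that workaround could look like in the gateway's `app.config`. The key name follows the `large_object.chunked_obj_len` setting mentioned above; the exact surrounding structure and the chosen value (the 5MB default divided by 5, as in the gist linked below) are illustrative assumptions, not authoritative configuration:

```erlang
%% leo_gateway app.config excerpt (sketch; structure and value are illustrative)
{large_object, [
    %% default chunk length is 5242880 bytes (5MB); a smaller chunk
    %% keeps each individual read under the 5s timeout on slow links
    {chunked_obj_len, 1048576}
]}
```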
Thank you for posting this issue.
Here you go: https://gist.github.com/Licenser/7370772 The change I made was dividing the sizes in the app.config by 5, so that each chunk took less than 5s. But this is specific to my connection speed; someone with a slower connection might still hit the problem.
Thank you for sharing the configuration. I'll check that in our environment.
This will only affect you when your connection is slow enough that a single chunk takes more than 5s. On a LAN without throttling it will probably not be reproducible unless you set the chunk size very high.
I fixed this issue, but it's a temporary solution.
I wonder if it would be possible to read each big chunk in smaller chunks. That would make it possible to time out not over the whole object but over (a configurable amount of) smaller chunks. The DB could still store 5MB, but the chunks read by cowboy would be much smaller, making the timeout manageable. One algorithm that comes to mind: read chunks starting small, grow them until reading a chunk takes approximately 50% of the timeout, and scale them down once it reaches 75% (just a mad thought).
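The grow/shrink idea in the comment above can be sketched as a small sizing function. This is my own illustration of the proposal, not code from the project; all names and the min/max bounds are hypothetical:

```python
# Sketch of the adaptive chunking idea: grow the chunk while a read
# stays under 50% of the timeout, shrink it once a read exceeds 75%.
def next_chunk_size(current, elapsed, timeout,
                    min_size=64 * 1024, max_size=5 * 1024 * 1024):
    """Return the chunk size (bytes) to use for the next read."""
    if elapsed < 0.5 * timeout:
        return min(current * 2, max_size)   # plenty of headroom: grow
    if elapsed > 0.75 * timeout:
        return max(current // 2, min_size)  # close to the limit: shrink
    return current                          # in the comfort zone: keep
```

A fast read (under half the timeout) doubles the chunk up to the cap, a slow one (over three quarters) halves it, and anything in between leaves it unchanged, so the size settles near what the connection can sustain.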
It's a good idea. We are still considering this issue. I'm planning to support it as follows:
We're going to implement the 1st step first. Thanks a lot.
Just an update here: the issue has changed slightly but still persists.
It is now only triggered after a full part of the multipart upload is done, and does not interrupt a part in the middle. Changing the chunk size in the gateway config still works around it.
Thank you for your report. We're planning to solve it in the next minor version as the 1st step.
I know :) Just wanted to update the symptoms since they changed.
I have implemented the 1st step.
I've closed this issue. If the 2nd step turns out to be needed, either of us can reopen it.
It seems one of the processes in the chain that stores a file chunk is set up with the default 5s timeout. This leads to a situation where copying large files fails with timeouts, to the point where it's impossible to upload anything larger.
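The arithmetic behind the failure can be made concrete. A hypothetical helper (the names are mine, not LeoFS's) shows why a fixed 5s per-chunk timeout breaks on slow links, and why shrinking the chunk works around it:

```python
def chunk_transfer_time(chunk_bytes, bandwidth_bytes_per_s):
    """Seconds needed to move one chunk at a given bandwidth."""
    return chunk_bytes / bandwidth_bytes_per_s

TIMEOUT_S = 5  # the default timeout discussed in this issue

# At 512KB/s, a 5MB chunk takes 10s and trips the 5s timeout...
print(chunk_transfer_time(5 * 1024 * 1024, 512 * 1024))  # → 10.0
# ...while a 1MB chunk (sizes divided by 5, as in the gist) takes 2s.
print(chunk_transfer_time(1 * 1024 * 1024, 512 * 1024))  # → 2.0
```

In general any connection slower than `chunk_size / timeout` (here roughly 1MB/s with the 5MB default) will hit the limit, which matches the observation above that a LAN without throttling does not reproduce the problem.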