When testing the client in Qubes, a file download quite often fails when I click the download link, with the following error in the logs: `BaseError: timeout is too large`
I think the problem is that larger files are timing out too soon. This will be less of a problem once we add Range requests and support segmented downloads, but for now we should either adjust our default timeout value for file downloads or increase the timeout each time the API call is retried.
Also, this shouldn't raise a `BaseError`; it should raise a `RequestTimeoutError`, which will pause the queue. That part should be a quick fix to `DownloadFileJob`, so if implementing incrementing timeouts during retries takes too long, it might make sense to split that out into a new issue.
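As a rough illustration of both ideas (a size-aware default timeout plus a timeout that grows on each retry), here is a minimal sketch. The function name, constants, and assumed transfer rate are all hypothetical, not the actual securedrop-client code:

```python
# Hypothetical sketch: compute a download timeout that scales with file
# size and grows on each retry attempt. None of these names or numbers
# come from the securedrop-client codebase.

def download_timeout(size_bytes: int, attempt: int = 0,
                     base: float = 20.0,
                     bytes_per_second: float = 50_000.0,
                     backoff: float = 1.5) -> float:
    """Return a timeout in seconds for a file download.

    base: minimum timeout, so small files still get a floor.
    bytes_per_second: assumed worst-case transfer rate (e.g. over Tor).
    backoff: multiplier applied once per retry attempt.
    """
    estimated = base + size_bytes / bytes_per_second
    return estimated * (backoff ** attempt)
```

With something like this, a 10 MB file gets a larger initial timeout than a 10 KB file, and each retry of the same job waits longer than the last instead of failing again at the same deadline.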
STR
1. Attach several files to a message; try files of different sizes (larger files are more likely to trigger the timeout error).
2. Download the smallest file first and work your way up to the larger files until you see a download fail.
3. Check the logs for the error mentioned above.