The B2 HowTo page says files larger than 200MB are uploaded to B2 in 100MB chunks, and that large file uploads can be resumed. I tested this by starting an upload of a 218.1MB file. When the transfer progress was just past 208MB, I stopped the transfer. After waiting 30-45 seconds, I clicked Resume. After a few seconds, the transfer started over from the beginning (0MB). My expectation was that with 208MB completed, at least one 100MB chunk would have finished, so at a minimum the resume should have started with at least 100MB already uploaded. I opened the log drawer afterwards and it was empty.
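For reference, here is the arithmetic behind my expectation as a minimal sketch (assuming the 100MB part size stated in the B2 HowTo; the class and variable names are my own, not Cyberduck's):

```java
public class ResumeOffsetSketch {
    public static void main(String[] args) {
        final long MB = 1024L * 1024L;
        final long partSize = 100L * MB;               // 100MB chunks per the B2 HowTo
        final long transferred = 208L * MB;            // progress shown when I stopped

        long completedParts = transferred / partSize;  // fully uploaded parts: 2
        long resumeOffset = completedParts * partSize; // where resume should start

        System.out.println("Completed parts: " + completedParts);
        System.out.println("Expected resume offset (MB): " + resumeOffset / MB);
    }
}
```

By this reasoning a resume at 208MB should pick up at the 200MB mark (two completed parts), not restart from 0MB.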
I'm not sure whether this error in the log is from when I stopped the transfer manually or from when I tried to resume the upload, but it does include the filename of my test file:
```
2017-05-10 13:15:11,864 [http-110200 - RSLogix 5000 Level 3; Project Development.pdf-1] ERROR ch.cyberduck.core.threading.LoggingUncaughtExceptionHandler - Thread http-110200 - RSLogix 5000 Level 3; Project Development.pdf-1 has thrown uncaught exception: Connection pool shut down
java.lang.IllegalStateException: Connection pool shut down
	at org.apache.http.util.Asserts.check(Asserts.java:34)
	at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:189)
	at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.requestConnection(PoolingHttpClientConnectionManager.java:257)
	at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:176)
	at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
	at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
	at org.apache.http.impl.execchain.ServiceUnavailableRetryExec.execute(ServiceUnavailableRetryExec.java:85)
	at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
	at synapticloop.b2.request.BaseB2Request.execute(BaseB2Request.java:341)
	at synapticloop.b2.request.BaseB2Request.executePost(BaseB2Request.java:254)
	at synapticloop.b2.request.B2UploadPartRequest.getResponse(B2UploadPartRequest.java:60)
	at synapticloop.b2.B2ApiClient.uploadLargeFilePart(B2ApiClient.java:574)
	at ch.cyberduck.core.b2.B2WriteFeature$1.call(B2WriteFeature.java:87)
	at ch.cyberduck.core.b2.B2WriteFeature$1.call(B2WriteFeature.java:76)
	at ch.cyberduck.core.http.AbstractHttpWriteFeature$2.run(AbstractHttpWriteFeature.java:98)
	at ch.cyberduck.core.threading.NamedThreadFactory$1.run(NamedThreadFactory.java:53)
	at java.lang.Thread.run(Thread.java:953)
```