Use multipart uploads for large JARs #17
Conversation
I'm not waiting on this because it looks like I'm moving away from using `aws-java-sdk-bundle` anyway, but it may be a nice change for everyone to have going forward.
Thanks for the PR, using multipart uploads definitely makes sense for large JARs. I wonder if we should also update … My only concern with this change is how it affects performance for small file uploads. I'm not sure if …
Yeah, I'm happy to make that change in … I do know that TransferManager only uses multipart uploads for files greater than 16 MiB. However, even for files under that limit, it still routes the simple upload requests through its thread pool, which theoretically adds some overhead.
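For reference, a sketch of how that threshold can be tuned when building the TransferManager (these are the v1 SDK's `TransferManagerBuilder` options; the specific byte values here are illustrative, not what SlimFast uses):

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;

public class TransferManagerConfigSketch {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    // TransferManager only switches to multipart uploads above this
    // threshold (the SDK default is 16 MiB). Files below it are still
    // uploaded as a single PUT, just submitted to the internal pool.
    TransferManager tm = TransferManagerBuilder.standard()
        .withS3Client(s3)
        .withMultipartUploadThreshold(16L * 1024 * 1024)
        .withMinimumUploadPartSize(8L * 1024 * 1024)
        .build();

    // ... use tm, then release its threads explicitly if desired:
    tm.shutdownNow(false); // false: leave the shared S3 client running
  }
}
```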
This is superseded by #37.
I recently tried to build something that depends on `com.amazonaws:aws-java-sdk-bundle` and had issues with SlimFast. When uploading that dependency JAR to S3, I got this error: …

The `aws-java-sdk-bundle-1.11.628.jar` file is particularly large, 134 MiB, and I think that's the problem. SlimFast sets a 5 second request timeout, and this big PUT request is probably exceeding it. Amazon supports multipart uploads for exactly this reason. This change uses Amazon's TransferManager to automatically split the upload into multiple HTTP requests. The TransferManager will automatically shut down its thread pool when garbage collected.

I've tested this on an internal HubSpot build by setting `dep.plugin.slimfast-plugin.version`, and it did successfully upload `aws-java-sdk-bundle-1.11.628.jar`.
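The core of the change can be sketched like this (the bucket, key, and file path below are placeholders, not values from the PR; SlimFast's actual wiring of the S3 client and upload loop differs):

```java
import java.io.File;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class MultipartJarUploadSketch {
  public static void main(String[] args) throws InterruptedException {
    File jar = new File("aws-java-sdk-bundle-1.11.628.jar");

    TransferManager tm = TransferManagerBuilder.standard()
        .withS3Client(AmazonS3ClientBuilder.defaultClient())
        .build();

    // For files over its multipart threshold, TransferManager splits
    // the upload into parallel part uploads, so no single HTTP request
    // has to carry the whole 134 MiB within the request timeout.
    Upload upload = tm.upload("my-bucket", "jars/" + jar.getName(), jar);
    upload.waitForCompletion(); // blocks until every part has finished
  }
}
```

Because each part is its own HTTP request, the per-request timeout applies to a part rather than to the entire 134 MiB body, which is what makes the 5 second limit workable.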