Catching the md5sum up with previously uploaded content to seed it for the actual transfer, then incrementally updating the md5sum as bytes are uploaded. Also adding md5sum rollback support in the event of a retryable exception. In addition to speeding up resumed uploads (especially when the previously uploaded portion is small relative to the file size), this also simplifies the md5 logic, as we now use the incremental md5 in all cases.
This greatly reduces the wall clock time when uploading large files. Situations in which the full MD5 is still calculated in its entirety:
- When resuming a partial upload
- When no name is specified for the object, because GCS uses the MD5 as the name

Also updating one test in ResumableUploadTests to change a byte that has already been uploaded, so that a failure occurs. Otherwise the on-the-fly evaluation of the md5 would not cause an error, as the changed byte would be hashed only after the mutation had occurred.
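A minimal illustration (not boto code) of why the test must mutate a byte that was already uploaded: an incremental digest only sees each byte once, at the moment it is hashed, so a later mutation of an already-hashed byte goes unnoticed by the running digest while a fresh full-file md5 would differ.

```python
import hashlib

data = bytearray(b"0123456789")

digest = hashlib.md5()
digest.update(bytes(data[:5]))  # first half already "uploaded" and hashed

data[0] = ord("X")              # mutate a byte that was already uploaded
digest.update(bytes(data[5:]))  # hash the remainder on the fly

incremental = digest.hexdigest()            # reflects the original first half
fresh = hashlib.md5(bytes(data)).hexdigest()  # reflects the mutated file
```

Here `incremental` equals the md5 of the original, unmutated data, while `fresh` does not, so only a mutation of already-uploaded bytes makes the incremental check disagree with the server.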
… explicit and more accurate with respect to the documentation.
…e size change error.
…ges during resumed upload.
1) Incorrect path rewrite when accessing storage via a proxy. 2) Incorrect call signature for AWSAuthConnection.build_base_http_request from the resumable upload handler. 3) Missing auth_path parameter, which resulted in various "NoneType" failures during split attempts.
…still needs to be integrated into other parts of boto
…just upload ID in PUT