* S3/PkgInfo.py: Updated to 1.1.0-beta2.
* NEWS: Updated.
* s3cmd.1: Regenerated.
The MIME type, reduced-redundancy storage class, and other attributes are now set for multipart uploads as well, not only for plain uploads.
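A minimal sketch of what "setting attributes for multipart uploads" means at the HTTP level: the attributes go into the headers of the InitiateMultipartUpload request. The header names are real S3 REST API headers; the helper function itself is hypothetical and only illustrates the idea.

```python
def build_initiate_headers(mime_type, reduced_redundancy=False, extra_headers=None):
    """Build the header dict sent with InitiateMultipartUpload so the
    resulting object gets the same attributes as a plain PUT upload.
    (Hypothetical helper, not s3cmd's actual code.)"""
    headers = dict(extra_headers or {})
    if mime_type:
        headers["content-type"] = mime_type
    if reduced_redundancy:
        # Real S3 header selecting the reduced-redundancy storage class.
        headers["x-amz-storage-class"] = "REDUCED_REDUNDANCY"
    return headers
```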
sync depends on the ETag being equal to the MD5 sum of the remote object in the bucket listings. Unfortunately, this does not hold for multipart-uploaded objects. We need to come up with some other way to store the MD5 sum for sync to work.
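For a multipart-uploaded object, S3 returns the MD5 of the concatenated per-part MD5 digests, suffixed with "-&lt;number of parts&gt;", rather than the MD5 of the content. A sync implementation can at least detect this case instead of comparing the ETag against a local MD5. A small sketch (the helper name is hypothetical):

```python
import re

# ETag of a multipart upload: 32 hex chars, a dash, and the part count,
# e.g. "d41d8cd98f00b204e9800998ecf8427e-12" (optionally quoted).
MULTIPART_ETAG = re.compile(r'^"?[0-9a-f]{32}-\d+"?$')

def etag_is_plain_md5(etag):
    """Return True when the ETag can be compared against a local MD5 sum."""
    return MULTIPART_ETAG.match(etag) is None
```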
- not needed in this branch
- Converted to non-threaded upload again (threading for all uploads, not only multipart, will be added later on).
- Using S3.send_file() instead of S3.send_request().
- Don't read data in the main loop; only compute the offset and chunk size and leave reading the data to S3.send_file().
- Re-enabled progress indicator.

Still broken:
- "s3cmd sync" doesn't work with multipart-uploaded files because the ETag no longer contains the MD5 sum of the file. MAJOR!
- Multipart upload abort is not triggered on all failures.
- s3cmd commands "mplist" and "mpabort" to be added.
- s3cmd should resume failed multipart uploads.
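The "only compute offset and chunk size" loop described above can be sketched as follows; actually reading the bytes is left to the send routine (S3.send_file() in s3cmd). The generator name is hypothetical.

```python
def iter_chunks(file_size, chunk_size):
    """Yield (seq, offset, length) for every part of a multipart upload.
    The caller passes offset/length to the send routine, which does the
    actual file reads; no data is touched in this loop."""
    seq, offset = 1, 0
    while offset < file_size:
        length = min(chunk_size, file_size - offset)
        yield seq, offset, length
        seq += 1
        offset += length
```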
Simplifies handling, avoids confusion.
Removed all the newly introduced parameters for passing enable_multipart and kept the setting in Config() instead. Also renamed --enable-multipart to --disable-multipart and introduced the --multipart-chunk-size=SIZE parameter.
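A hedged sketch of how a --multipart-chunk-size=SIZE value (in MB) might be validated before being stored in Config(). The 5 MB lower bound is S3's documented minimum part size; the function name and constant are hypothetical, not s3cmd's actual code.

```python
MIN_CHUNK_SIZE_MB = 5  # S3's minimum multipart part size

def parse_chunk_size_mb(value):
    """Convert a chunk size given in MB to bytes, rejecting sizes
    below S3's minimum part size."""
    size = int(value)
    if size < MIN_CHUNK_SIZE_MB:
        raise ValueError(
            "multipart chunk size must be at least %d MB" % MIN_CHUNK_SIZE_MB)
    return size * 1024 * 1024
```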
Don't create a thread pool when Config().multipart_num_threads == 1.
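The idea can be sketched like this: with a single worker, the parts are uploaded inline in the calling thread, avoiding pool setup entirely. The function names are hypothetical and the pool type is a stand-in; s3cmd's actual threading code differs.

```python
from concurrent.futures import ThreadPoolExecutor

def upload_parts(parts, upload_one, num_threads=1):
    """Upload all parts, using a thread pool only when more than one
    worker thread is requested."""
    if num_threads <= 1:
        # No pool: run sequentially in the current thread.
        return [upload_one(p) for p in parts]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return list(pool.map(upload_one, parts))
```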
Includes conversion from TAB to 4-SPACE indentation!
For example, to upload a UTF-8 encoded HTML file, use: --mime-type="text/html; charset=utf-8"
Guess MIME types using python-magic
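A minimal sketch of MIME-type guessing with python-magic as an optional dependency, falling back to the stdlib mimetypes module when it is unavailable. Note that the two python-magic variants in the wild have incompatible APIs; the magic.from_file(path, mime=True) call below is the pip python-magic API, and the exact s3cmd code differs.

```python
import mimetypes

def guess_mime_type_stdlib(path):
    """Fallback: guess the MIME type from the filename extension."""
    mime, _encoding = mimetypes.guess_type(path)
    return mime

try:
    import magic  # python-magic: inspects file content, not just the name

    def guess_mime_type(path):
        return magic.from_file(path, mime=True)
except ImportError:
    guess_mime_type = guess_mime_type_stdlib
```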
Nicholas Cynober reported that his s3cmd sync --cf-inval kept crashing while parsing a CloudFront distribution list containing both S3Origin and CustomOrigin distributions. Skip over non-S3Origin distributions when translating an S3Uri to a CFUri.
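A sketch of the fix described above: when mapping an S3 bucket URI to a CloudFront distribution, only consider distributions with an S3 origin, since CustomOrigin entries have no S3 bucket and previously caused the crash. The dict-based data shapes and function name are hypothetical simplifications of s3cmd's CloudFront classes.

```python
def find_distribution_for_bucket(distributions, bucket):
    """Return the first distribution whose S3Origin points at `bucket`,
    skipping CustomOrigin distributions entirely."""
    for dist in distributions:
        origin = dist.get("S3Origin")
        if origin is None:
            continue  # CustomOrigin distribution: no S3 bucket, skip it
        if origin.get("DNSName", "").startswith(bucket + "."):
            return dist
    return None
```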