for buckets you don't own. Patch from http://arxiv.org/help/bulk_data_s3
Currently, commands that fail without returning a response body cause s3cmd to print an ugly backtrace. This change checks that the `data` field of the response is non-empty before calling `getTreeFromXml` on it. An example of an offending command is `s3cmd info` on a nonexistent object.
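A minimal sketch of the guard, with ElementTree standing in for s3cmd's own XML helper; the surrounding `describe_error` function is illustrative, not the actual patch:

```python
import xml.etree.ElementTree as ET

def getTreeFromXml(xml_data):
    # Stand-in for s3cmd's XML helper.
    return ET.fromstring(xml_data)

def describe_error(response):
    # Guard: only parse XML when the response actually carries a body.
    # Some failures (e.g. 's3cmd info' on a nonexistent object) return no
    # body, and feeding the empty string to getTreeFromXml is what raised
    # the backtrace this change avoids.
    if response.get("data"):
        tree = getTreeFromXml(response["data"])
        return "S3 error: %s" % tree.findtext("Code")
    return "S3 error: HTTP status %d" % response["status"]
```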
…sed. The `if` statement is still true if the file fails the MD5 check.
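A hypothetical sketch of the pattern this refers to: a retry condition that must stay true when the MD5 check fails, so the retry actually fires. The names (`download`, `md5_matches`, `MAX_RETRIES`) are illustrative, not s3cmd's actual identifiers:

```python
import hashlib

MAX_RETRIES = 5  # illustrative constant

def md5_matches(path, expected_md5):
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest() == expected_md5

def fetch_with_retries(download, path, expected_md5):
    for attempt in range(MAX_RETRIES):
        ok = download(path)
        # The condition must remain true (i.e. keep retrying) not only when
        # the transfer fails but also when the file arrives corrupted.
        if not ok or not md5_matches(path, expected_md5):
            continue
        return True
    return False
```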
Recursion detection on symlinks was too restrictive. It would detect the following as recursion:

    dir/
      main-1234/
        file1
        file2
      main -> main-1234

This is clearly not recursion but a common pattern, e.g. when hosting package repositories. Python's os.walk also does not do recursion detection, so let's behave like the Python stdlib.
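One plausible implementation of the looser check, treating a symlink as recursive only when it resolves to one of its own ancestors; this is a sketch, not necessarily the code the change adds:

```python
import os

def is_recursive_symlink(link_path):
    """True only when a symlinked directory points at one of its own
    ancestors, i.e. following it would loop forever."""
    target = os.path.realpath(link_path)
    parent = os.path.realpath(os.path.dirname(link_path))
    # 'main -> main-1234' resolves to a sibling, not an ancestor: not a loop.
    return parent == target or parent.startswith(target + os.sep)
```

This matches the stdlib's stance: os.walk(path, followlinks=True) follows symlinked directories and leaves loop avoidance to the caller, so only genuine ancestor links need to be rejected.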
This also fixes a crash with:

    s3cmd put /xyz/big-file s3://bucket/ > /dev/null

Reported by HanJingYu.
When you use `s3cmd cp` to copy multiple keys (without using the recursive flag), you get a KeyError:

    s3cmd cp s3://source-bucket/prefix* s3://target-bucket

Logged here: http://sourceforge.net/tracker/?func=detail&aid=3091912&group_id=178907&atid=887015 and here: https://bugs.launchpad.net/ubuntu/+source/s3cmd/+bug/523586
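A hypothetical sketch of what multi-key copy has to do: expand the wildcard against the bucket listing, then issue one copy per match instead of assuming a single source/destination pair. The `list_keys` and `copy_key` helpers are illustrative, not s3cmd's API:

```python
import fnmatch

def expand_copy(list_keys, copy_key, src_bucket, pattern, dst_bucket):
    # Each matching key becomes its own copy operation, rather than
    # assuming exactly one pair (which is where the KeyError came from).
    matches = [k for k in list_keys(src_bucket) if fnmatch.fnmatch(k, pattern)]
    for key in matches:
        # Keep the key name unchanged in the target bucket.
        copy_key(src_bucket, key, dst_bucket, key)
    return matches
```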
- Converted to non-threaded upload again (will add threading for all uploads, not only multipart, later on)
- Using S3.send_file() instead of S3.send_request()
- Don't read data in the main loop; only compute offset and chunk size and leave it to S3.send_file() to read the data (see the sketch after this list).
- Re-enabled progress indicator.

Still broken:
- "s3cmd sync" doesn't work with multipart-uploaded files because the ETag no longer contains the MD5sum of the file. MAJOR!
- Multipart upload abort is not triggered on all failures.
- s3cmd commands "mplist" and "mpabort" to be added.
- s3cmd should resume failed multipart uploads.
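A minimal sketch of that main loop, assuming a `send_file(f, offset=..., chunk_size=..., part_no=...)` style callable; the real S3.send_file() takes request objects and more parameters:

```python
import os

def multipart_upload(send_file, filename, chunk_size=15 * 1024 * 1024):
    # The main loop never reads file data itself; it only slices the file
    # into (offset, size) pairs and lets send_file() do the actual reading.
    # That keeps memory usage flat and puts progress reporting in one place.
    total = os.path.getsize(filename)
    with open(filename, "rb") as f:
        offset = 0
        part_no = 1
        while offset < total:
            size = min(chunk_size, total - offset)
            send_file(f, offset=offset, chunk_size=size, part_no=part_no)
            offset += size
            part_no += 1
```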
Simplifies handling, avoids confusion.