The 's3cmd get' command opens destination files in 'ab' (append) mode before even trying to download. In effect, if the file doesn't exist, s3cmd creates it. This patch resolves an unwanted side-effect: empty files being left behind by s3cmd after an error (e.g. file not found, no permission). It should only delete files that s3cmd itself created.

Signed-off-by: Oren Held <email@example.com>
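The fix described above can be sketched roughly as follows. This is a minimal illustration, not s3cmd's actual code: `download_to_file` and its `fetch` callable are hypothetical names standing in for the real download path.

```python
import os

def download_to_file(path, fetch):
    """Open `path` in append mode, but on error remove it only if we created it.

    `fetch` is a hypothetical callable that writes the object's data to the
    given file handle; it stands in for the actual S3 download.
    """
    existed = os.path.exists(path)  # remember whether the file pre-existed
    try:
        with open(path, "ab") as f:  # 'ab' creates the file if it is missing
            fetch(f)
    except Exception:
        # Clean up only files this call created, never pre-existing ones
        if not existed and os.path.exists(path):
            os.unlink(path)
        raise
```

The key point is recording `os.path.exists(path)` before the `open()` call, since opening in 'ab' mode is what creates the empty file in the first place.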
Currently, commands whose error responses do not include a body cause s3cmd to print an ugly backtrace. This change checks that the data field of the response is non-empty before calling `getTreeFromXml` on it. An example of an offending command is running `s3cmd info` on a nonexistent object.
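The guard amounts to something like the sketch below. The `getTreeFromXml` stand-in and the dict-shaped `response` are assumptions for illustration, not s3cmd's exact internals:

```python
import xml.etree.ElementTree as ET

def getTreeFromXml(xml_bytes):
    # Stand-in for s3cmd's helper: parse an XML document into an element tree
    return ET.fromstring(xml_bytes)

def parse_error_response(response):
    """Return the parsed XML error tree, or None if the body is empty.

    `response` is assumed here to be a dict with a 'data' key holding the
    raw HTTP response body.
    """
    if response.get("data"):  # skip XML parsing when the body is empty
        return getTreeFromXml(response["data"])
    return None
```

Without the emptiness check, `ET.fromstring(b"")` raises a ParseError, which is the backtrace the change avoids.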
…sed. The if statement is still true if the file fails the md5 check.
Recursion detection on symlinks was too restrictive. It would detect the following as recursion:

    dir/
        main-1234/
            file1
            file2
        main -> main-1234

This is clearly not recursion, and it is a common pattern, e.g. when hosting package repositories. Python's os.walk also does not do recursion detection, so let's behave like the Python stdlib.
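The stdlib behaviour can be checked directly: os.walk with followlinks=True descends into the `main` symlink without treating the layout above as recursion. A small self-contained demonstration:

```python
import os
import tempfile

# Build the layout from the commit message:
#   dir/
#     main-1234/
#       file1
#       file2
#     main -> main-1234
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "main-1234"))
for name in ("file1", "file2"):
    open(os.path.join(root, "main-1234", name), "w").close()
os.symlink("main-1234", os.path.join(root, "main"))

# os.walk happily walks through the symlinked directory as well;
# it performs no recursion detection of its own.
visited = sorted(os.path.relpath(d, root)
                 for d, _, _ in os.walk(root, followlinks=True))
print(visited)  # ['.', 'main', 'main-1234']
```

Note that os.walk documents that followlinks=True can lead to infinite recursion on genuinely cyclic links; it simply does not flag benign patterns like this one.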
In passing, this fixes a crash with:

    s3cmd put /xyz/big-file s3://bucket/ > /dev/null

Reported by HanJingYu.
When you use 's3cmd cp' to copy multiple keys (without using the recursive flag), you get a KeyError:

    s3cmd cp s3://source-bucket/prefix* s3://target-bucket

Logged here: http://sourceforge.net/tracker/?func=detail&aid=3091912&group_id=178907&atid=887015 and here: https://bugs.launchpad.net/ubuntu/+source/s3cmd/+bug/523586