Comparing changes

base fork: dangra/s3cmd
head fork: dangra/s3cmd
Commits on Jun 07, 2011
@jleclanche jleclanche Avoid catching SystemExit in a try block ed69060
@jleclanche jleclanche Add __repr__ for S3Uri class f22df27
@jleclanche jleclanche Add S3.MultiPart module with basic functionality 5c2eb56
@jleclanche jleclanche Make use of S3.MultiPart functionality by default 4dc5e15
@jleclanche jleclanche Rename MultiPartUpload.bucket into MultiPartUpload.s3 d56fdaa
@jleclanche jleclanche Implement multipart threading 6a90998
@jleclanche jleclanche Use the ThreadPool interface to thread multipart uploads and return a proper response
@jleclanche jleclanche Dynamically increase the chunk size depending on the file size b9c33d2
@jleclanche jleclanche Add the --enable-multipart option to s3cmd deea1e7
@jleclanche jleclanche Add multipart_num_threads and multipart_chunk_size to Config 32c846c
@jleclanche jleclanche Properly pass multipart enabling, default enabled on > 100MB abd57a8
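Taken together, these commits establish a size threshold for multipart uploads and a chunk size that grows with the file. A hypothetical sketch of that logic (the constants and names are illustrative, not s3cmd's actual code):

    # Illustrative only: multipart default-enabled above 100 MB, per the commits above.
    MIN_MULTIPART_SIZE = 100 * 1024 * 1024

    def choose_chunk_size(file_size):
        # Grow the chunk with the file so the part count stays under
        # S3's 10,000-part limit.
        chunk = 15 * 1024 * 1024
        while file_size // chunk > 9999:
            chunk *= 2
        return chunk

    def should_use_multipart(file_size, enabled=True):
        return enabled and file_size > MIN_MULTIPART_SIZE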
Commits on Oct 18, 2011
@mludvig mludvig Ignore CF distros with CustomOrigin in [sync --cf-inval]
Bug reported by Nicholas Cynober: his s3cmd sync --cf-inval
kept crashing while parsing a CloudFront distribution list
containing both S3Origin and CustomOrigin distributions.

Let's skip over non-S3Origin distros when translating S3Uri to CFUri.
Commits on Nov 21, 2011
@ksperling ksperling Use python-magic for guessing MIME types if available 3243067
@ksperling ksperling Catch the right exception, doh. 59932f5
@mludvig mludvig Merge pull request #14 from ksperling/master
Guess MIME types using python-magic
Commits on Dec 30, 2011
@mludvig mludvig Merge branch 'master' of ssh:// 777acd9
@mludvig mludvig Allow optional parameters in --mime-type
For example, to upload a UTF-8 encoded HTML file, use:
	--mime-type="text/html; charset=utf-8"
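A complete invocation might look like this (the file and bucket names are placeholders):

    s3cmd put index.html s3://my-bucket/ --mime-type="text/html; charset=utf-8"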
@mludvig mludvig Merge branch 'master' into adys-multipart
Includes conversion from TAB to 4-SPACE indentation!
@mludvig mludvig Whitespace conversion TAB to 4-SPACE indent 731b7e0
Commits on Jan 02, 2012
@mludvig mludvig Mention --mime-type="xx/yy; param=abc" in NEWS addecb7
@mludvig mludvig Added VIM default settings to,
@mludvig mludvig Support for non-threaded multipart upload
Don't create thread-pool with Config().multipart_num_threads=1.
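A minimal sketch of what that branching could look like (upload_part and the part list are stand-ins, not s3cmd's actual code):

    from multiprocessing.pool import ThreadPool

    def upload_part(part):
        # Hypothetical per-part uploader, standing in for s3cmd's real code.
        print("uploading part %r" % (part,))

    def upload_parts(parts, num_threads):
        # Don't create a thread pool when only one worker is configured.
        if num_threads <= 1:
            for part in parts:
                upload_part(part)
        else:
            pool = ThreadPool(num_threads)
            pool.map(upload_part, parts)
            pool.close()
            pool.join()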
@mludvig mludvig Cleanup: pass enable_multipart via Config()
Remove all the newly introduced parameters for passing enable_multipart
and keep it in Config() instead.

Also renames --enable-multipart to --disable-multipart and
introduces --multipart-chunk-size=SIZE parameter.
@mludvig mludvig Fixed errors to make it work, finally! 7b09ee8
@mludvig mludvig Properly handle multipart chunk sizes 9dda31d
@mludvig mludvig Renamed multipart_chunk_size to multipart_chunk_size_mb
Simplifies handling, avoids confusion.
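For context, converting the configured MB value into a byte count has to respect S3's 5 MB minimum part size; a hedged sketch, not s3cmd's actual function:

    def multipart_chunk_size_bytes(multipart_chunk_size_mb):
        # Config stores the value in MB; S3 rejects parts smaller than
        # 5 MB (except the last one), so clamp before converting to bytes.
        return max(multipart_chunk_size_mb, 5) * 1024 * 1024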
@mludvig mludvig Fixed after commit 2320b45 2933252
@mludvig mludvig Merge branch 'master' into adys-threaded-multipart 92ba05a
Commits on Jan 05, 2012
@mludvig mludvig Reworked Multipart upload
- Converted to non-threaded upload again
  (will add threading for all uploads, not only multipart, later on)
- Using S3.send_file() instead of S3.send_request()
- Don't read data in the main loop; only compute the offset and chunk size
  and leave it to S3.send_file() to read the data (see the sketch after
  this commit message).
- Re-enabled progress indicator.

Still broken:
- "s3cmd sync" doesn't work with multipart uploaded files because
  the ETag no longer contains MD5sum of the file. MAJOR!
- Multipart upload abort is not triggered with all failures.
- s3cmd commands "mplist" and "mpabort" to be added.
- s3cmd should resume failed multipart uploads.
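A hedged sketch of the offset/chunk-size loop described above, with send_file_stub standing in for S3.send_file() (the real method's signature differs):

    def multipart_upload(filename, file_size, chunk_size):
        # The main loop only computes offsets; reading is delegated.
        seq, offset = 1, 0
        while offset < file_size:
            length = min(chunk_size, file_size - offset)
            send_file_stub(filename, offset, length, seq)
            seq += 1
            offset += length

    def send_file_stub(filename, offset, length, seq):
        # Stand-in for S3.send_file(): seek to and read only its own slice.
        with open(filename, "rb") as f:
            f.seek(offset)
            data = f.read(length)
        print("part %d: %d bytes at offset %d" % (seq, len(data), offset))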
@mludvig mludvig Removed Config.multipart_num_threads
- not needed in this branch
@mludvig mludvig Temporarily disabled MultiPart for 's3cmd sync'
sync depends on ETag == MD5 sum of the remote object
in the bucket listings. Unfortunately for multipart-uploaded
objects this is not true. We need to come up
with some other way to store the MD5 sum for sync to work.
@mludvig mludvig MIME-Type guessing is now on by default 0d477b9
@mludvig mludvig Renamed confusing "id" to "seq" in b78cd50
@mludvig mludvig Reorder metadata handling in S3.object_put()
Now the mime-type, reduced-redundancy and other
attributes are set for multipart-uploaded files as well.
@mludvig mludvig Try to abort MultiPart upload on all errors 07ed770
@mludvig mludvig Fixed headers passing in Multipart upload f46250a
@mludvig mludvig Merge branch 'multipart-single' 3f44bd9
Commits on Jan 06, 2012
@mludvig mludvig Fixed help text 589be07
@mludvig mludvig Merge branch 'multipart-single' cfcbb44
@mludvig mludvig Improved d9c7251
@mludvig mludvig Released version 1.1.0-beta2
* S3/ Updated to 1.1.0-beta2
* NEWS: Updated.
* s3cmd.1: Regenerated.
Commits on Jan 07, 2012
@mludvig mludvig Import S3.Exceptions.ParameterError
Reported by Andy McGregor
@canadianveggie canadianveggie Fixing bug 3091912 - KeyError when copying multiple keys
When you use 's3cmd cp' to copy multiple keys (without the recursive flag) you get a KeyError.
s3cmd cp s3://source-bucket/prefix* s3://target-bucket

Logged here:
and here:
@mludvig mludvig Merge pull request #20 from pulseenergy/master
Fixing KeyError when copying multiple keys (SourceForge bug 3091912)
Commits on Jan 09, 2012
@mludvig mludvig Improved compatibility with old python-magic
Sadly there are two "magic" modules for python with
different APIs.  Improving compatibility wrapper to
better handle both.
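The two APIs differ roughly as below; a compatibility shim in the spirit of this commit might probe for both (a sketch, assuming one of the two "magic" modules is installed):

    import magic

    try:
        # Newer python-magic: class-based API.
        _magic = magic.Magic(mime=True)
        def mime_magic(filename):
            return _magic.from_file(filename)
    except AttributeError:
        # Older file(1) bindings: open/load/file API.
        _magic = magic.open(magic.MAGIC_MIME)
        _magic.load()
        def mime_magic(filename):
            return _magic.file(filename)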
Commits on Jan 12, 2012
@mludvig mludvig Use magic.MAGIC_MIME instead of MAGIC_MIME_TYPE 1bc3cd0
@mludvig mludvig Improved compatibility with Python 2.4
Apparently in Py2.4 the Exception class doesn't have 'message'
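Exception.message only appeared in Python 2.5, so 2.4-compatible code has to fall back to str(e) or e.args, roughly like this (Python 2 syntax, illustrative):

    try:
        raise ValueError("something failed")
    except ValueError, e:  # Python 2 except syntax
        # Py2.4's Exception has no 'message' attribute (added in 2.5),
        # so fall back to the string form of the exception.
        text = getattr(e, "message", None) or str(e)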
@mludvig mludvig Enable multipart for [sync] - do not check MD5
Multipart-uploaded files don't have a valid MD5 sum in their ETag.
We can detect it and disable MD5 comparison when deciding whether
to sync these files. In such a case only the size (and later on a
timestamp) is compared.
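Detection can rely on the ETag's shape: a plain PUT leaves the object's MD5 as a 32-hex-digit ETag, while multipart ETags carry a "-<part count>" suffix. A minimal sketch (not s3cmd's actual check):

    def etag_is_md5(etag):
        # Plain-PUT ETags are the object's MD5: exactly 32 hex digits.
        # Multipart ETags look like "<hex>-<number of parts>".
        etag = etag.strip('"')
        return len(etag) == 32 and "-" not in etag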
@mludvig mludvig Released version 1.1.0-beta3
* S3/ Updated to 1.1.0-beta3
* s3cmd.1: Regenerated.
Commits on Jan 15, 2012
@aral aral Added S3 static site support for Amazon CloudFront invalidation on sync. 6eacb08
Commits on Jan 17, 2012
@mludvig mludvig Compute speed and elapsed time for Multipart uploads
Along the way, this fixes a crash with:
s3cmd put /xyz/big-file s3://bucket/ > /dev/null
Reported by HanJingYu
Commits on Jan 31, 2012
@jbraeuer jbraeuer Remove recursion detection for symlinks.
Recursion detection on symlinks was too restrictive. It would detect the following as recursion:

    main -> main-1234

This is clearly not recursion, and it is a common pattern, e.g. when hosting package repositories.
Python's os.walk does not do recursion detection either, so let's behave like the Python stdlib.
Commits on Feb 22, 2012
@interra interra info() reports "Disabled MD5 check for FILE" even if --no-check-md5 is used. The if statement is still true if the file fails the md5 check.
@mludvig mludvig Merge pull request #26 from interra/patch-1
Don't report "Disabled MD5 check for FILE" when --no-check-md5 used
@mludvig mludvig Merge pull request #23 from jbraeuer/master
Follow symlinks, when requested, drop recursion detection.
@mludvig mludvig Merge pull request #21 from aral/s3-static-site-cloudfront-invalidation
Added S3 static site support for Amazon CloudFront invalidation on sync.
Commits on Feb 29, 2012
@kellymclaughlin kellymclaughlin Handle empty return bodies when processing S3 errors.
Currently, commands whose error responses have no body cause
s3cmd to output an ugly backtrace. This change checks whether
the data field of the response is non-empty before
calling `getTreeFromXml` on it. An example of an offending
command is using `s3cmd info` on a nonexistent object.
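The guard described here amounts to something like the following (a sketch; the shape of s3cmd's response dict is assumed, and ElementTree stands in for s3cmd's XML helper):

    from xml.etree.ElementTree import fromstring as getTreeFromXml  # stand-in

    def parse_error(response):
        # Only parse the body when there is one; e.g. `s3cmd info` on a
        # nonexistent object gets an error response with no XML body.
        if response.get("data"):
            return getTreeFromXml(response["data"])
        return None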
Commits on Mar 01, 2012
@mludvig mludvig Merge pull request #32 from kellymclaughlin/check-for-empty-error-res…
Handle empty return bodies when processing S3 errors.