Commits on Apr 22, 2014
  1. handle errors during multipart uploads

    authored
    Now that we're sending the Content-MD5, S3 could return a BadDigest
    error.  We should catch and retry that.  There are a few other
    retryable codes, including 503 SlowDown, that we should obey too.
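    A minimal sketch of that retry policy, with upload_part() and
    S3Error as stand-ins for s3cmd's actual internals:

      import time

      RETRYABLE = {"BadDigest", "SlowDown"}  # codes named above; the real list may be longer

      class S3Error(Exception):
          # Placeholder for s3cmd's S3 error type; carries the error code.
          def __init__(self, code):
              super(S3Error, self).__init__(code)
              self.code = code

      def upload_part_with_retry(upload_part, data, max_retries=5):
          delay = 1
          for attempt in range(max_retries):
              try:
                  return upload_part(data)
              except S3Error as e:
                  if e.code not in RETRYABLE or attempt == max_retries - 1:
                      raise              # not retryable, or out of attempts
                  time.sleep(delay)      # back off before the next attempt
                  delay *= 2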
Commits on Apr 20, 2014
  1. Don't double-calculate MD5s on multipart chunks

    authored
    Calculate it at upload time, and record it for later comparison.  This
    eliminates the double-calculation we were doing, which just wastes CPU
    cycles and time.
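    A sketch of the single-pass idea; send() and the md5_cache dict are
    hypothetical names, not s3cmd's actual internals:

      import base64, hashlib

      def upload_chunk(chunk, send, md5_cache, part_no):
          # Hash the chunk once: send it as Content-MD5, and record the
          # hex digest so the later comparison needs no second pass.
          digest = hashlib.md5(chunk)
          headers = {"Content-MD5":
                     base64.b64encode(digest.digest()).decode("ascii")}
          send(chunk, headers)                     # hypothetical transport call
          md5_cache[part_no] = digest.hexdigest()  # kept for later comparison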
  2. add Content-MD5 header for each multipart chunk

    authored
    Calculate the MD5 value and include it in the Content-MD5 header for
    each multipart chunk.
    
    This has the unfortunate side-effect of calculating the MD5 for each
    chunk twice: once for the initial upload, and once after the upload
    completes.  That will have to get fixed.
  3. add Content-MD5 header to PUT objects (not multipart)

    authored
    This lets S3 verify on receipt that what it got matches what we
    thought we were sending.
    
    Disable with --no-check-md5.
    
    This does not cover multipart uploads.
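    For reference, Content-MD5 carries the base64 of the binary MD5
    digest (per RFC 1864), not the hex string (a common gotcha).  A
    sketch:

      import base64, hashlib

      def content_md5(data):
          # Value for the Content-MD5 header: base64 of the *binary* digest.
          return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")

      # e.g. content_md5(b"hello") -> 'XUFAKrxLKna5cZ2REBfFkg=='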
  4. handle S3DownloadError better

    authored
    Rather than backtracing, report the error and continue.
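    A sketch of "report and continue"; S3DownloadError and download()
    are stand-ins for s3cmd internals:

      import sys

      class S3DownloadError(Exception):
          pass  # placeholder for s3cmd's exception type

      def download_all(keys, download):
          # Report each failure and keep going instead of backtracing.
          failed = []
          for key in keys:
              try:
                  download(key)
              except S3DownloadError as e:
                  sys.stderr.write("ERROR: %s: %s\n" % (key, e))
                  failed.append(key)
          return failed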
Commits on Apr 18, 2014
  1. Clarified GPL version 2 plus help cleanups

    matteobar authored
  2. Clarified GPL version 2 plus text cleanups

    matteobar authored
  3. Clarified GPL Version 2, --help text cleanup

    matteobar authored
    Fixed a few misspellings.  Fixed the --multipart-chunk-size-mb help
    text to show the 15MB default instead of noneMB.
  4. Install Instructions Updates

    matteobar authored
  5. Install instructions updates

    matteobar authored
Commits on Apr 13, 2014
  1. handle failure of getgrgid_grpname() and getpwuid_username()

    authored
    If these throw a TypeError (not sure how to cause it, but had one
    report of same), we would die.  So catch that too.
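    A guess at the shape of the fix (the Unix-only grp/pwd modules; the
    numeric-id fallback is illustrative):

      import grp, pwd

      def getgrgid_grpname(gid):
          # KeyError is the documented failure; a TypeError was reported
          # in the wild, so catch that too and fall back to the raw id.
          try:
              return grp.getgrgid(gid).gr_name
          except (KeyError, TypeError):
              return str(gid)

      def getpwuid_username(uid):
          try:
              return pwd.getpwuid(uid).pw_name
          except (KeyError, TypeError):
              return str(uid)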
Commits on Apr 11, 2014
  1. don't import ParseError unconditionally

    authored
    ParseError won't exist on all systems.  That's OK: where we were
    catching it, we can just catch Exception instead.
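    The usual conditional-import pattern, assuming the ParseError in
    question is xml.etree.ElementTree's (which only exists on newer
    Pythons):

      try:
          from xml.etree.ElementTree import ParseError as XmlParseError
      except ImportError:
          # Older Pythons don't define ParseError; catching Exception in
          # the few places that need it works just as well.
          XmlParseError = Exception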
  2. Merge pull request #311 from alertedsnake/bugfix_default_cfgfile_path

    authored
    Bugfix: don't rely on $HOME to be set
  3. fix subcmd_batch_del(), using SortedDict slices

    authored
    subcmd_batch_del() was sending the entire remote_list() as a single
    batch delete operation to S3.  That fails for >1000 objects, though we
    were ignoring the failure.  It can also time out while uploading a huge
    list (one example: deleting 40k objects meant a 7MB deletion-list XML)
    and churning through it.
    
    The whole "look for a marker" approach was poor.  We have the remote_list,
    we just couldn't slice it up.  The previous commit adds the getslice
    operator, so now we can.  This greatly simplifies the delete operation
    as we can iterate over slices of 1000 until it's empty.
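    The resulting loop, sketched with batch_delete() standing in for the
    actual S3 batch-delete request:

      def delete_in_batches(remote_list, batch_delete, batch_size=1000):
          # At most 1000 keys per request (S3's batch-delete limit).
          while len(remote_list) > 0:
              batch_delete(remote_list[:batch_size])
              remote_list = remote_list[batch_size:]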
  4. add SortedDict.__getslice__()

    authored
    It's nice to think of SortedDicts as sorted lists, which means one
    should be sliceable.
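    A toy model of the idea (in modern Python slicing arrives via
    __getitem__; Python 2's __getslice__ is gone):

      class SortedDict(dict):
          # A dict whose keys can be sliced in sorted order, so that
          # remote_list[:1000] above just works.
          def sorted_keys(self):
              return sorted(self.keys())

          def __getitem__(self, key):
              if isinstance(key, slice):
                  keys = self.sorted_keys()[key]
                  return SortedDict((k, dict.__getitem__(self, k))
                                    for k in keys)
              return dict.__getitem__(self, key)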
Commits on Apr 10, 2014
  1. import getpass for Windows

    authored
  2. Bugfix: make a decent assumption about default platform and pick .s3cfg
     location using os.path.expanduser, rather than relying on $HOME to be set

    alertedsnake authored
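    The core of it is one line; os.path.expanduser("~") resolves the
    home directory even when $HOME is unset (falling back to the pwd
    database on Unix and to USERPROFILE & co. on Windows):

      import os

      default_cfg = os.path.join(os.path.expanduser("~"), ".s3cfg")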
Commits on Apr 9, 2014
  1. Added GNU General Public License disclaimer

    matteobar authored
  2. Removed superfluous text from end of GNU license

    matteobar authored
    "How to Apply These Terms to Your New Programs" not needed in license
  3. Updates to INSTALL file

    matteobar authored
Commits on Apr 8, 2014
  1. README - Added s3cmd description: 'What is s3cmd'

    matteobar authored
  2. README file - updated copyright and text

    matteobar authored
  3. another mime magic fix

    authored
    The None return should be a tuple (None, None) instead.
  4. better MIME magic library handling

    authored
    The different MIME magic libraries' from_file(), file(), and
    id_filename() functions, respectively, take either a
    filesystem-encoded (generally UTF-8) string or a unicode filename.
    Different versions of the libraries expect different input types for
    the filename, though.  This is annoying.
    
    Here, we call these functions, first with a UTF-8 encoded string
    filename.  If that fails with a UnicodeDecodeError, we try again
    passing a unicode filename.
    
    Also, delete mime_magic_buffer() everywhere, and the introspection of
    gzip files to see what type of object is inside.  It doesn't matter to
    the S3 web server - it needs to be type application/x-gzip, not
    type=application/tar encoding=gzip (as mimetypes would tell us). We
    stopped using the encoding value here as HTTP Content-Encoding in
    commit 44e3589 anyhow.
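    A sketch of the two-step fallback, assuming the python-magic flavor
    of the library (the one with from_file()):

      import magic

      def mime_type(filename):
          # Try a filesystem-encoded (UTF-8) byte string first; if the
          # library balks with UnicodeDecodeError, retry with the
          # unicode filename.
          try:
              return magic.from_file(filename.encode("utf-8"), mime=True)
          except UnicodeDecodeError:
              return magic.from_file(filename, mime=True)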
Commits on Mar 31, 2014
  1. Merge pull request #310 from hrchu/test/objectExpiration

    authored
    Add object expiration test cases to run-tests
Commits on Mar 30, 2014
  1. Add object expiration test cases to run-tests

    hrchu authored
Commits on Mar 29, 2014
  1. hardlink fix

    authored
    If we didn't record the hardlink md5 because the file size
    was zero, don't then fail to look it up later.
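    Presumably a defensive lookup along these lines; the hardlinks cache
    keyed by (st_dev, st_ino) and compute_md5() are illustrative names:

      import hashlib

      def compute_md5(path):
          # Stream the file through MD5 to bound memory use.
          h = hashlib.md5()
          with open(path, "rb") as f:
              for block in iter(lambda: f.read(1 << 16), b""):
                  h.update(block)
          return h.hexdigest()

      def md5_for_hardlink(hardlinks, dev_inode, path):
          # Zero-length files were never recorded, so use .get() and
          # fall back to hashing instead of crashing on KeyError.
          md5 = hardlinks.get(dev_inode)
          return md5 if md5 is not None else compute_md5(path)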
Commits on Mar 28, 2014
  1. Print a proper error for missing dateutil module

    authored
    Catch the failure to import dateutil where it happens,
    not later in s3cmd's ImportError handler.  As this is a new
    dependency, many people don't have it installed already.
    
    Without this, the s3cmd ImportError handler (invoked because the
    import of dateutil in S3/Utils.py fails) throws another uncaught
    exception when invoking
      s = u' '.join([unicodise(a) for a in sys.argv])
    because unicodise() comes from S3/Utils.py which, as just noted,
    failed to import.
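    A sketch of catching it at the import site (the wording of the
    message is illustrative):

      import sys

      try:
          import dateutil.parser
      except ImportError:
          sys.stderr.write(
              "ImportError trying to import dateutil.parser.\n"
              "Please install the python-dateutil module.\n")
          sys.exit(1)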