Commits on Apr 12, 2012
  1. @mludvig

    Merge pull request #40 from res0nat0r/bucket-locations

    mludvig authored
    Add all bucket endpoints to --help
  2. @mludvig

    Merge pull request #46 from smcq/license-file

    mludvig authored
    adding LICENSE file containing GPL v2 text
Commits on Mar 29, 2012
  1. @res0nat0r

    Added all bucket endpoints

    res0nat0r authored
Commits on Mar 1, 2012
  1. @mludvig

    Merge pull request #32 from kellymclaughlin/check-for-empty-error-response-body

    mludvig authored
    Handle empty return bodies when processing S3 errors.
Commits on Feb 29, 2012
  1. @kellymclaughlin

    Handle empty return bodies when processing S3 errors.

    kellymclaughlin authored
    Currently, commands that fail without returning a body cause
    s3cmd to output an ugly backtrace. This change checks to
    see if the data field of the response is non-empty before
    calling `getTreeFromXml` on it. An example of an offending
    command is using `s3cmd info` on a nonexistent object.
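
    A minimal sketch of the described guard (illustrative names, not
    s3cmd's actual internals): parse the XML error document only when
    the response actually carried a body.

        import xml.etree.ElementTree as ET

        def error_code_from_response(body):
            # Some failing requests (e.g. `s3cmd info` on a nonexistent
            # object) return no body at all; parsing "" would raise a
            # ParseError instead of reporting a clean error.
            if not body:
                return None
            root = ET.fromstring(body)
            code = root.find("Code")
            return code.text if code is not None else None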
Commits on Feb 22, 2012
  1. @mludvig

    Merge pull request #21 from aral/s3-static-site-cloudfront-invalidation

    mludvig authored
    Added S3 static site support for Amazon CloudFront invalidation on sync.
  2. @mludvig

    Merge pull request #23 from jbraeuer/master

    mludvig authored
    Follow symlinks, when requested, drop recursion detection.
  3. @mludvig

    Merge pull request #26 from interra/patch-1

    mludvig authored
    Don't report "Disabled MD5 check for FILE" when --no-check-md5 used
  4. @interra

    info() reports "Disabled MD5 check for FILE" even if --no-check-md5 used

    interra authored
    The if statement is still true if the file fails the MD5 check.
Commits on Jan 31, 2012
  1. @jbraeuer

    Remove recursion detection for symlinks.

    jbraeuer authored
    Recursion detection on symlinks was too restrictive. It would detect the following as recursion:
    
    dir/
        main-1234/
                  file1
                  file2
        main -> main-1234
    
    This is clearly not recursion, and it is a common pattern, e.g. when hosting package repositories.
    Python's os.walk also does no recursion detection, so let's behave like the Python stdlib.
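
    For reference, a sketch of the stdlib behaviour the commit adopts:
    os.walk() descends into symlinked directories only when asked, and
    performs no recursion detection of its own.

        import os

        # followlinks=True descends into "main -> main-1234" above;
        # os.walk leaves any loop protection to the caller.
        for dirpath, dirnames, filenames in os.walk("dir", followlinks=True):
            print(dirpath, filenames)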
Commits on Jan 17, 2012
  1. @mludvig

    Compute speed and elapsed time for Multipart uploads

    mludvig authored
    Incidentally, this also fixes a crash with:
    s3cmd put /xyz/big-file s3://bucket/ > /dev/null
    Reported by HanJingYu
Commits on Jan 15, 2012
  1. @aral
Commits on Jan 12, 2012
  1. @mludvig

    Released version 1.1.0-beta3

    mludvig authored
    * S3/PkgInfo.py: Updated to 1.1.0-beta3
    * s3cmd.1: Regenerated.
  2. @mludvig

    Enable multipart for [sync] - do not check MD5

    mludvig authored
    Multipart-uploaded files don't have a valid MD5 sum in their ETag.
    We can detect it and disable MD5 comparison when deciding whether
    to sync these files. In such a case only the size (and later on a
    timestamp) is compared.
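
    A sketch of how such a detection can look (illustrative, not
    necessarily s3cmd's code): S3 reports multipart ETags as
    "<hex>-<part count>", so a "-" marks them as unusable as MD5 sums.

        def etag_is_md5(etag):
            # Plain uploads: the ETag is the object's MD5 hex digest.
            # Multipart uploads: the ETag looks like "<md5-of-md5s>-<parts>".
            return "-" not in etag.strip('"')

        etag_is_md5('"9b2cf535f27731c974343645a3985328"')     # True
        etag_is_md5('"d41d8cd98f00b204e9800998ecf8427e-12"')  # False: multipart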
  3. @mludvig

    Improved compatibility with Python 2.4

    mludvig authored
    Apparently in Py2.4 the Exception class doesn't have a 'message'
    attribute.
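
    A minimal sketch of a 2.4-safe fallback (an assumption about the
    kind of guard meant; Exception.message only appeared in Python 2.5):

        import sys

        def exc_text(e):
            # Use .message when present (Py2.5+), otherwise fall back
            # to str(e), which works on every version.
            return getattr(e, "message", None) or str(e)

        try:
            raise ValueError("boom")
        except ValueError:
            print(exc_text(sys.exc_info()[1]))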
  4. @mludvig
Commits on Jan 9, 2012
  1. @mludvig

    Improved compatibility with old python-magic

    mludvig authored
    Sadly there are two "magic" modules for python with
    different APIs.  Improving compatibility wrapper to
    better handle both.
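
    A sketch of such a wrapper (assuming the two APIs in question are
    the older file/libmagic bindings and the newer python-magic package):

        import magic

        if hasattr(magic, "Magic"):
            # newer python-magic: magic.Magic(mime=True).from_file(path)
            _ms = magic.Magic(mime=True)
            def mime_type(path):
                return _ms.from_file(path)
        else:
            # older libmagic bindings: magic.open() / ms.load() / ms.file()
            _ms = magic.open(magic.MAGIC_MIME)
            _ms.load()
            def mime_type(path):
                return _ms.file(path)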
Commits on Jan 7, 2012
  1. @mludvig

    Merge pull request #20 from pulseenergy/master

    mludvig authored
    Fixing KeyError when copying multiple keys (SourceForge bug 3091912)
  2. @canadianveggie

    Fixing bug 3091912 - KeyError when copying multiple keys

    canadianveggie authored
    When you use 's3cmd cp' to copy multiple keys (without using the recursive flag) you get a KeyError.
    s3cmd cp s3://source-bucket/prefix* s3://target-bucket
    
    Logged here: http://sourceforge.net/tracker/?func=detail&aid=3091912&group_id=178907&atid=887015
    and here: https://bugs.launchpad.net/ubuntu/+source/s3cmd/+bug/523586
  3. @mludvig

    Import S3.Exceptions.ParameterError

    mludvig authored
    Reported by Andy McGregor
Commits on Jan 6, 2012
  1. @mludvig

    Released version 1.1.0-beta2

    mludvig authored
    * S3/PkgInfo.py: Updated to 1.1.0-beta2
    * NEWS: Updated.
    * s3cmd.1: Regenerated.
  2. @mludvig

    Improved format-manpage.pl

    mludvig authored
  3. @mludvig
  4. @mludvig

    Fixed help text

    mludvig authored
Commits on Jan 5, 2012
  1. @mludvig
  2. @mludvig
  3. @mludvig
  4. @mludvig

    Reorder metadata handling in S3.object_put()

    mludvig authored
    Now we set the MIME type, reduced redundancy, and other
    attributes for multipart-uploaded files as well.
  5. @mludvig
  6. @mludvig
  7. @mludvig

    Temporarily disabled MultiPart for 's3cmd sync'

    mludvig authored
    sync depends on the ETag being the MD5 sum of the remote
    object in the bucket listings. Unfortunately, this is not
    true for multipart-uploaded objects. We need to come up
    with some other way to store the MD5 sum for sync to work.
  8. @mludvig

    Removed Config.multipart_num_threads

    mludvig authored
    - not needed in this branch
  9. @mludvig

    Reworked Multipart upload

    mludvig authored
    - Converted to non-threaded upload again
      (will add threading for all uploads, not only multipart, later on)
    - Using S3.send_file() instead of S3.send_request()
    - Don't read data in the main loop, only compute offset and chunk size
      and leave it to S3.send_file() to read the data (see the sketch
      after this list).
    - Re-enabled progress indicator.
    
    Still broken:
    - "s3cmd sync" doesn't work with multipart uploaded files because
      the ETag no longer contains MD5sum of the file. MAJOR!
    - Multipart upload abort is not triggered with all failures.
    - s3cmd commands "mplist" and "mpabort" to be added.
    - s3cmd should resume failed multipart uploads.
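
    A sketch of the offset/size computation described above (a
    hypothetical helper, not the actual s3cmd code):

        def chunks(total_size, chunk_size):
            # Yield (offset, size) pairs covering the file; the sender
            # (e.g. S3.send_file()) reads each chunk itself.
            offset = 0
            while offset < total_size:
                yield offset, min(chunk_size, total_size - offset)
                offset += chunk_size

        list(chunks(25, 10))   # [(0, 10), (10, 10), (20, 5)]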
Commits on Jan 2, 2012
  1. @mludvig