setuptools is newer than distutils, and is supported on python 2.6 and higher, matching the versions we currently support. It also packages everything into eggs for distribution. Maybe this will reduce the number of bug reports about where s3cmd's modules got installed.
Give us a chance to see it, and maybe continue.
http://pythonhosted.org/setuptools/setuptools.html#new-and-changed-setup-keywords notes that the old keyword 'requires' no longer works, and we need to use 'install_requires' instead.
HTTP 405 "Method Not Allowed" is a permanent error. No sense retrying. Just raise the error and let upper levels do as they wish. http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingRRS.html http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETpolicy.html http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTpolicy.html There is no indication that 405 is returned on a PUT call, so we don't include this test in send_file().
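The decision described above can be sketched as follows (the status sets and names are illustrative, not s3cmd's exact retry lists):

```python
class PermanentS3Error(Exception):
    """Raised for errors that must not be retried (illustrative name)."""

def handle_response_status(status, retries_left):
    # 405 Method Not Allowed is permanent: raise immediately and let
    # upper levels decide what to do.
    if status == 405:
        raise PermanentS3Error("HTTP 405 Method Not Allowed")
    # Transient server-side errors are worth retrying (illustrative set).
    if status in (500, 503) and retries_left > 0:
        return "retry"
    return "done"
```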
Currently, --acl-revoke only accepts user canonical ids as the grantee specification. The 'info' command lists users' display names in ACL entries. Adding support for display names in --acl-revoke makes it easy to revoke ACLs based on 'info' command output. This alleviates the problem outlined in #223.
Different versions of python append different strings to the pattern created when turning a glob into a regular expression. For example, for the pattern '.snapshot/', glob.fnmatch.translate() on python 2.4 would yield '.snapshot/', while on python 2.6 and above we get u'\.snapshot\\/\\Z(?ms)'. Our test for "is this pattern a directory" was thus failing on python 2.6, because it only checked endswith('/'). Whoops. This patch tests for both types of pattern endings now. This fixes #467 and fixes Fedora Infrastructure's sync of EPEL content into S3 regional mirrors.
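A minimal sketch of the two-ending check, assuming the translated pattern forms quoted above (the helper name is illustrative):

```python
def pattern_is_directory(translated):
    """Return True if a translated glob pattern refers to a directory.

    Python 2.4's fnmatch.translate() left the trailing '/' at the very
    end of the translated pattern; python 2.6+ appends a terminator
    ('\\Z(?ms)') after it, so we must accept both endings.
    """
    return translated.endswith('/') or translated.endswith('/\\Z(?ms)')
```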
No need to spam stderr with something we are properly handling. Logging at info level lets the message appear with the -v or --debug log levels.
Instead of using os.system(), we can open the file that we want to put via stdin, and pass that file handle to test_s3cmd(), which passes it to the Popen() call as the stdin argument. This way, any failure in s3cmd can be captured and reported just like in all other tests.
It's always a good idea to close any extra open file descriptors when starting subprocesses. From the subprocess.Popen() manpage: If close_fds is true, all file descriptors except 0, 1 and 2 will be closed before the child process is executed. We call Popen in two places: executing gpg to encrypt or decrypt a file, and in run-tests.py when executing s3cmd during the tests. This patch adds close_fds = True in both cases.
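The two changes above can be sketched together (the helper name and the command used here are illustrative; s3cmd's actual test harness differs):

```python
import subprocess
import sys

def run_with_stdin(cmd, stdin_file):
    """Run a child process, feeding it an already-open file handle as
    stdin, and close all other inherited descriptors (close_fds=True),
    so that failures can be captured and reported like any other test."""
    p = subprocess.Popen(cmd, stdin=stdin_file,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         close_fds=True)
    out, err = p.communicate()
    return p.returncode, out, err
```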
This adds a test to put a large file (> multipart-chunk-size-mb) from stdin.
When doing a multipart upload from stdin, we were reading the next part from stdin into a buffer, and then sending down an offset measured from the start of stdin, not from the start of the buffer we just read into. Whoops. This patch fixes it, so we send down offset=0 when sending a buffer, starting from the beginning of that buffer. This fixes multipart stdin uploads of more than one chunk.
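A minimal sketch of the corrected behavior (the chunk size and helper name are illustrative; s3cmd's multipart code is structured differently):

```python
import io

CHUNK_SIZE = 15 * 1024 * 1024  # illustrative multipart chunk size

def read_next_part(stream):
    """Read the next multipart chunk from a non-seekable stream
    (e.g. stdin) into a buffer. The upload must then use offset 0
    within this buffer, not the cumulative offset into the stream."""
    buf = stream.read(CHUNK_SIZE)
    offset = 0  # correct: relative to the buffer, not to the stream
    return buf, offset
```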
If we have been given a buffer to send, calculate the checksum on that, regardless of whether it also came from stdin. The stdin part doesn't matter, and it's possible we've been given a buffer for some other reason. Just operate on the buffer we're given.
Uploads from stdin broke after the v4 signing code went into place, as we were unconditionally trying to open an already open file handle (stdin), and we were reading from stdin twice (which can't work). This patch makes sure that, during an upload from stdin, we don't read the data again just to calculate the sha256 checksum, since we have already read it into a buffer. Instead, we simply run sha256 over the already-read buffer. A trivial spelling mistake in a message is fixed too.
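The buffer-first checksum logic from the last two messages can be sketched as (the function and parameter names are illustrative, not s3cmd's actual API):

```python
import hashlib

def checksum_for_upload(filename, buffer=None):
    """If a buffer was already read (e.g. a stdin upload), hash it
    directly instead of reopening and re-reading the source; only
    fall back to reading the file when no buffer was provided."""
    h = hashlib.sha256()
    if buffer is not None:
        h.update(buffer)
    else:
        with open(filename, 'rb') as f:
            for block in iter(lambda: f.read(65536), b''):
                h.update(block)
    return h.hexdigest()
```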
We want to pass content-type, but we can't know it from stdin, so be explicit about where we get it: either from the command line or from Config().
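A tiny sketch of that precedence, assuming hypothetical names for the two sources (s3cmd's actual option and Config() attribute may differ):

```python
def content_type_for_stdin(cmdline_ct, config_ct):
    """Illustrative: prefer an explicit command-line content-type,
    fall back to the Config() value, then to a generic default."""
    return cmdline_ct or config_ct or 'application/octet-stream'
```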
Commit ecee692 added back in SSL hostname checking when the python libraries supported it. Really, it added it unconditionally, forgetting that older pythons lacked some of the exported functions and exception classes needed to do that checking correctly. This patch lets older python 2.6 and 2.7 continue to work as before, without hostname SSL certificate checking, because those versions don't ship a new-enough ssl library to implement the checks. I'm not backporting that into s3cmd either - you get what your system's python standard library provides.
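The capability test can be sketched as a guarded import: probe the ssl module for the names the check needs, and fall back to the old unchecked behavior when they're missing (the flag name is illustrative):

```python
# Only enable hostname checking when the ssl module actually exports
# the pieces it requires; old 2.6/2.7 ssl modules lack them.
try:
    from ssl import CertificateError, match_hostname  # noqa: F401
    have_hostname_check = True
except ImportError:
    have_hostname_check = False
```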