s3-parallel-put: Parallel uploads to Amazon AWS S3
s3-parallel-put speeds the uploading of many small keys to Amazon AWS S3 by executing multiple PUTs in parallel.
The program reads your credentials from the environment variables
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

    s3-parallel-put --bucket=BUCKET --prefix=PREFIX SOURCE

Keys are computed by combining PREFIX with the path of each file,
starting from SOURCE. Values are file contents.
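For example (bucket and directory names here are illustrative):

    s3-parallel-put --bucket=mybucket --prefix=backup/ photos

would upload every file under the photos directory, so that a file
photos/2011/beach.jpg typically ends up under the key
backup/photos/2011/beach.jpg.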
There are a few other options:
--dry-run causes the program to print what it would do, but not to upload
any files. It is strongly recommended that you test the program with this
option before transferring any real data.
--limit=N causes the program to upload no more than N files. Combined
with --dry-run, this is also useful for testing.
--put=MODE sets the heuristic used for deciding whether to upload a file
or not. Valid modes are:
    add: set the key's content if the key is not already present.
    stupid: always set the key's content.
    update: set the key's content if the key is not already present or
        its content has changed (as determined by its ETag).
The default heuristic is update. If you know that the keys are not
already present then stupid is fastest (it avoids an extra HEAD request
for each key). If you know that some keys are already present and that
they have the correct values, then add is faster than update (it avoids
calculating the MD5 sum of the content on the client side). These
heuristics are sketched in code after this option list.
--content-type=CONTENT-TYPE sets the Content-Type header of the
uploaded keys.
--gzip compresses all values and sets the Content-Encoding header to
gzip (also sketched after this option list).
--processes=N sets the number of parallel upload processes.
--verbose causes more output to be printed, including progress of individual files.
--quiet causes less output.
--insecure controls whether a secure connection is used.
--grant=GRANT applies a canned ACL to all files uploaded.
--header=HEADER:VALUE adds an arbitrary header to the S3 file. This
option can be specified multiple times.
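To make the difference between the put modes concrete, the three
heuristics could be implemented along the following lines with boto
(the library the program is built on). This is an illustrative sketch,
not the program's actual code; should_put and its arguments are made-up
names.

    import hashlib

    def should_put(bucket, key_name, path, mode):
        if mode == 'stupid':
            return True                    # always upload
        key = bucket.get_key(key_name)     # costs a HEAD request
        if key is None:
            return True                    # add and update both upload new keys
        if mode == 'add':
            return False                   # key exists, leave it alone
        # update: compare the local MD5 sum against the key's ETag
        with open(path, 'rb') as f:
            local_md5 = hashlib.md5(f.read()).hexdigest()
        return key.etag.strip('"') != local_md5

Note that an S3 ETag equals the MD5 sum of the content only for keys
uploaded in a single PUT, which is the case for the small keys this
program targets.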
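Likewise, the combined effect of --gzip and --header can be pictured
like this (again an illustrative boto sketch, not the program's code):

    import gzip, io

    def put_gzipped(bucket, key_name, data, extra_headers=None):
        buf = io.BytesIO()
        with gzip.GzipFile(fileobj=buf, mode='wb') as f:
            f.write(data)                    # compress the value
        headers = {'Content-Encoding': 'gzip'}
        headers.update(extra_headers or {})  # e.g. from --header=HEADER:VALUE
        key = bucket.new_key(key_name)
        key.set_contents_from_string(buf.getvalue(), headers=headers)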
How it works:
- A walker process generates (filename, key_name) pairs and inserts them
  into a shared put queue.
- Multiple putter processes consume these pairs in parallel, uploading
  the files to S3 and sending file-by-file statistics to a statistics
  queue.
- A statter process consumes these file-by-file statistics and generates
  summary statistics.
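The pipeline can be sketched with Python's multiprocessing module as
follows. Queue and function names here are illustrative, and the S3
upload itself is elided:

    import os
    from multiprocessing import JoinableQueue, Process

    def walker(source, prefix, put_queue, n_putters):
        for dirpath, dirnames, filenames in os.walk(source):
            for filename in filenames:
                path = os.path.join(dirpath, filename)
                put_queue.put((path, prefix + path))
        for _ in range(n_putters):
            put_queue.put(None)            # one sentinel per putter

    def putter(put_queue, stat_queue):
        while True:
            pair = put_queue.get()
            if pair is None:
                put_queue.task_done()
                break
            path, key_name = pair
            # ... PUT the contents of path to S3 under key_name ...
            stat_queue.put(os.path.getsize(path))
            put_queue.task_done()

    def statter(stat_queue):
        files, total = 0, 0
        while True:
            size = stat_queue.get()
            if size is None:
                break
            files += 1
            total += size
        print('%d files, %d bytes' % (files, total))

    if __name__ == '__main__':
        put_queue, stat_queue = JoinableQueue(), JoinableQueue()
        putters = [Process(target=putter, args=(put_queue, stat_queue))
                   for _ in range(4)]      # cf. --processes=N
        for p in putters:
            p.start()
        walker('photos', 'backup/', put_queue, len(putters))
        put_queue.join()                   # wait until every file is uploaded
        stat_queue.put(None)
        statter(stat_queue)                # summarize in the main process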
Known issues:
- Limited error checking.
To do:
- Automatically parallelize uploads of large files by splitting them
  into chunks.
Copyright (C) 2011 Tom Payne
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.