S3cmd tool for Amazon Simple Storage Service (S3)
=================================================

Author:
    Michal Ludvig <michal@logix.cz>

S3tools / S3cmd project homepage:
    http://s3tools.sourceforge.net

S3tools / S3cmd mailing list:
    s3tools-general@lists.sourceforge.net

Amazon S3 homepage:
    http://aws.amazon.com/s3

!!!
!!! Please consult INSTALL file for installation instructions!
!!!

What is Amazon S3
-----------------
Amazon S3 provides a managed, internet-accessible storage
service where anyone can store any amount of data and
retrieve it later. The maximum size of a single "object"
is 5GB; the number of objects is not limited.

S3 is a paid service operated by the well-known Amazon.com
internet book shop. Before storing anything in S3 you
must sign up for an "AWS" account (where AWS = Amazon Web
Services) to obtain a pair of identifiers: an Access Key
and a Secret Key. You will need to give these keys to S3cmd.
Think of them as a username and password for your S3
account.

Pricing explained
-----------------
At the time of this writing the costs of using S3 are (in USD):

$0.15 per GB per month of storage space used

plus

$0.10 per GB - all data uploaded

plus

$0.18 per GB - first 10 TB / month data downloaded
$0.16 per GB - next 40 TB / month data downloaded
$0.13 per GB - data downloaded / month over 50 TB

plus

$0.01 per 1,000 PUT or LIST requests
$0.01 per 10,000 GET and all other requests

If, for instance, on the 1st of January you upload 2GB of
JPEG photos from your holiday in New Zealand, at the end
of January you will be charged $0.30 for using 2GB of
storage space for a month, $0.20 for uploading 2GB of
data, and a few cents for requests. That comes to slightly
over $0.50 for a complete backup of your precious holiday
pictures.
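
In other words, the January bill breaks down roughly as
follows (the request charge is only an estimate, as it
depends on how many PUT requests the upload takes):

    Storage:  2 GB x $0.15/GB  = $0.30
    Upload:   2 GB x $0.10/GB  = $0.20
    Requests: a few PUTs       ~ $0.01
    ----------------------------------
    January total              ~ $0.51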

In February you don't touch it. Your data are still on S3
servers so you pay $0.30 for those two gigabytes, but not
a single cent will be charged for any transfer. That comes
to $0.30 as an ongoing cost of your backup. Not too bad.

In March you allow anonymous read access to some of your
pictures and your friends download, say, 500MB of them.
As the files are owned by you, you are responsible for the
costs incurred. That means at the end of March you'll be
charged $0.30 for storage plus $0.09 for the download traffic
generated by your friends.

There is no minimum monthly contract and no setup fee. What
you use is what you pay for. In the beginning my bill used
to be something like US$0.03, or even nil.

That's the pricing model of Amazon S3 in a nutshell. Check
Amazon S3 homepage at http://aws.amazon.com/s3 for more
details.

Needless to say, all this money is charged by Amazon
itself; there is obviously no charge for using S3cmd :-)

Amazon S3 basics
----------------
Files stored in S3 are called "objects" and their names are
officially called "keys". Each object belongs to exactly one
"bucket". Buckets are something like directories or folders,
with a few restrictions: 1) each user can have at most 100
buckets, 2) bucket names must be unique amongst all users
of S3, 3) buckets cannot be nested into a deeper hierarchy,
and 4) a bucket name can only consist of basic alphanumeric
characters plus dot (.) and dash (-). No spaces, no accented
or UTF-8 letters, etc.
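
A couple of examples of what these rules allow (the names
below are made up for illustration):

    valid:    logix.cz-test, my-backup-2007, photos.2007
    invalid:  my_bucket    (no underscores)
              my bucket    (no spaces)
              žluté-fotky  (no accented letters)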

On the other hand there are almost no restrictions on object
names ("keys"). They can be any UTF-8 strings up to 1024
bytes long. Interestingly enough, an object name can contain
the forward slash character (/), thus "my/funny/picture.jpg"
is a valid object name. Note that there are no directories
or buckets called "my" and "funny" - it is really a single
object named "my/funny/picture.jpg" and S3 does not care at
all that it _looks_ like a directory structure.

To describe objects in S3 storage we invented a URI-like
schema in the following form:

    s3://BUCKET/OBJECT
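
For instance, using names that appear elsewhere in this
document, the following are both valid S3-URIs - the first
refers to a bucket, the second to an object in it:

    s3://logix.cz-test
    s3://logix.cz-test/my/funny/picture.jpg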

See the HowTo later in this document for more example
usages of this S3-URI schema.

Simple S3cmd HowTo
------------------
1) Register for Amazon AWS / S3
   Go to http://aws.amazon.com/s3, click the "Sign up
   for web service" button in the right column and work
   through the registration. You will have to supply
   your Credit Card details in order to allow Amazon
   to charge you for S3 usage.
   At the end you should possess your Access and Secret Keys.

2) Run "s3cmd --configure"
   You will be asked for the two keys - copy and paste
   them from your confirmation email or from your Amazon
   account page. Be careful when copying them! They are
   case sensitive and must be entered accurately or you'll
   keep getting errors about invalid signatures or similar.

3) Run "s3cmd ls" to list all your buckets.
   As you just started using S3 there are no buckets owned by
   you as of now. So the output will be empty.
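
   The command should simply return to the prompt with
   nothing listed:

   ~$ s3cmd ls
   ~$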

4) Make a bucket with "s3cmd mb s3://my-new-bucket-name"
   As mentioned above, bucket names must be unique amongst
   _all_ users of S3. That means simple names like "test"
   or "asdf" are already taken and you must make up something
   more original. I sometimes prefix my bucket names with
   my e-mail domain name (logix.cz), leading to a bucket
   name like, for instance, 'logix.cz-test':

   ~$ s3cmd mb s3://logix.cz-test
   Bucket 'logix.cz-test' created

5) List your buckets again with "s3cmd ls"
   Now you should see your freshly created bucket

   ~$ s3cmd ls
   2007-01-19 01:41 s3://logix.cz-test

6) List the contents of the bucket

   ~$ s3cmd ls s3://logix.cz-test
   Bucket 'logix.cz-test':
   ~$

   It's empty, indeed.

7) Upload a file into the bucket

   ~$ s3cmd put addressbook.xml s3://logix.cz-test/addrbook.xml
   File 'addressbook.xml' stored as s3://logix.cz-test/addrbook.xml (123456 bytes)

8) Now we can list the bucket contents again

   ~$ s3cmd ls s3://logix.cz-test
   Bucket 'logix.cz-test':
   2007-01-19 01:46 120k s3://logix.cz-test/addrbook.xml

9) Retrieve the file back and verify that it hasn't been
   corrupted

   ~$ s3cmd get s3://logix.cz-test/addrbook.xml addressbook-2.xml
   Object s3://logix.cz-test/addrbook.xml saved as 'addressbook-2.xml' (123456 bytes)

   ~$ md5sum addressbook.xml addressbook-2.xml
   39bcb6992e461b269b95b3bda303addf addressbook.xml
   39bcb6992e461b269b95b3bda303addf addressbook-2.xml

   The checksum of the original file matches the checksum
   of the retrieved one. Looks like it worked :-)

10) Clean up: delete the object and remove the bucket

   ~$ s3cmd rb s3://logix.cz-test
   ERROR: S3 error: 409 (Conflict): BucketNotEmpty

   Ouch, we can only remove empty buckets!

   ~$ s3cmd del s3://logix.cz-test/addrbook.xml
   Object s3://logix.cz-test/addrbook.xml deleted

   ~$ s3cmd rb s3://logix.cz-test
   Bucket 'logix.cz-test' removed

Hints
-----
The basic usage is as simple as described in the previous
section.

You can increase the level of verbosity with the -v option,
and if you're really keen to know what the program does under
its bonnet, run it with -d to see all 'debugging' output.
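
For example, to run the bucket listing from the HowTo with
extra verbosity or with full debugging output (the output
itself is omitted here, as it is long and varies between
runs):

~$ s3cmd -v ls s3://logix.cz-test
~$ s3cmd -d ls s3://logix.cz-test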

After configuring it with --configure, all available options
are written to your ~/.s3cfg file. It's a text file, ready
to be modified in your favourite text editor.
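
For illustration, the stored credentials take a simple
INI-style form; a fragment of ~/.s3cfg might look something
like this (the values are placeholders, not real keys):

[default]
access_key = <your Access Key>
secret_key = <your Secret Key>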

Multiple local files may be specified for the "s3cmd put"
operation. In that case the S3 URI should only include
the bucket name, not the object part:

~$ s3cmd put file-* s3://logix.cz-test/
File 'file-one.txt' stored as s3://logix.cz-test/file-one.txt (4 bytes)
File 'file-two.txt' stored as s3://logix.cz-test/file-two.txt (4 bytes)

Alternatively, if you specify the object part as well, it
will be treated as a prefix and all filenames given on the
command line will be appended to the prefix to make up
the object name. However, the --force option is required
in this case:

~$ s3cmd put --force file-* s3://logix.cz-test/prefixed:
File 'file-one.txt' stored as s3://logix.cz-test/prefixed:file-one.txt (4 bytes)
File 'file-two.txt' stored as s3://logix.cz-test/prefixed:file-two.txt (4 bytes)

This prefixing mode works with "s3cmd ls" as well:

~$ s3cmd ls s3://logix.cz-test
Bucket 'logix.cz-test':
2007-01-19 02:12 4 s3://logix.cz-test/file-one.txt
2007-01-19 02:12 4 s3://logix.cz-test/file-two.txt
2007-01-19 02:12 4 s3://logix.cz-test/prefixed:file-one.txt
2007-01-19 02:12 4 s3://logix.cz-test/prefixed:file-two.txt

Now with a prefix to list only names beginning with "file-":

~$ s3cmd ls s3://logix.cz-test/file-*
Bucket 'logix.cz-test':
2007-01-19 02:12 4 s3://logix.cz-test/file-one.txt
2007-01-19 02:12 4 s3://logix.cz-test/file-two.txt

For more information refer to:
* S3cmd / S3tools homepage at http://s3tools.sourceforge.net
* Amazon S3 homepage at http://aws.amazon.com/s3

Enjoy!

Michal Ludvig
* michal@logix.cz
* http://www.logix.cz/michal
