S3cmd tool for Amazon Simple Storage Service (S3)
=================================================

Author:
Michal Ludvig <michal@logix.cz>

S3tools / S3cmd project homepage:
http://s3tools.sourceforge.net

S3tools / S3cmd mailing list:
s3tools-general@lists.sourceforge.net

Amazon S3 homepage:
http://aws.amazon.com/s3

!!!
!!! Please consult INSTALL file for installation instructions!
!!!

What is Amazon S3
-----------------
Amazon S3 provides a managed, internet-accessible storage
service where anyone can store any amount of data and
retrieve it later. The maximum size of a single "object"
is 5GB; the number of objects is not limited.

S3 is a paid service operated by the well-known internet
shop Amazon.com. Before storing anything in S3 you
must sign up for an "AWS" account (where AWS = Amazon Web
Services) to obtain a pair of identifiers: an Access Key and
a Secret Key. You will need to give these keys to S3cmd.
Think of them as a username and password for
your S3 account.

Pricing explained
-----------------
At the time of this writing the costs of using S3 are (in USD):

  $0.15 per GB per month of storage space used

plus

  $0.10 per GB - all data uploaded

plus

  $0.18 per GB - first 10 TB / month data downloaded
  $0.16 per GB - next 40 TB / month data downloaded
  $0.13 per GB - data downloaded / month over 50 TB

plus

  $0.01 per 1,000 PUT or LIST requests
  $0.01 per 10,000 GET and all other requests

If, for instance, on the 1st of January you upload 2GB of
JPEG photos from your holiday in New Zealand, at the
end of January you will be charged $0.30 for using 2GB of
storage space for a month, $0.20 for uploading 2GB
of data, and a few cents for requests.
That comes to slightly over $0.50 for a complete backup
of your precious holiday pictures.

In February you don't touch it. Your data are still on S3
servers, so you pay $0.30 for those two gigabytes, but not
a single cent will be charged for any transfer. That comes
to $0.30 as the ongoing cost of your backup. Not too bad.

In March you allow anonymous read access to some of your
pictures and your friends download, say, 500MB of them.
As the files are owned by you, you are responsible for the
costs incurred. That means at the end of March you'll be
charged $0.30 for storage plus $0.09 for the download traffic
generated by your friends.
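
The March figures above can be double-checked with a quick
back-of-the-envelope calculation. This is only a sketch using
the per-GB rates quoted earlier; a real bill would also include
the small per-request fees:

```shell
# Sketch: the March example, using the rates listed above.
# 2 GB stored for one month, 500 MB downloaded by friends.
awk 'BEGIN {
    storage  = 2   * 0.15   # $0.15 per GB-month of storage
    download = 0.5 * 0.18   # $0.18 per GB, first download tier
    printf "storage: $%.2f, download: $%.2f\n", storage, download
}'
# prints: storage: $0.30, download: $0.09
```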

There is no minimum monthly contract or setup fee. What
you use is what you pay for. At the beginning my bill used
to be around US$0.03, or even nil.

That's the pricing model of Amazon S3 in a nutshell. Check
the Amazon S3 homepage at http://aws.amazon.com/s3 for more
details.

Needless to say, all this money is charged by Amazon
itself; there is obviously no charge for using S3cmd :-)

Amazon S3 basics
----------------
Files stored in S3 are called "objects" and their names are
officially called "keys". Each object belongs to exactly one
"bucket". Buckets are a kind of directory or folder, with
some restrictions: 1) each user can have at most 100
buckets, 2) bucket names must be unique amongst all users
of S3, 3) buckets cannot be nested into a deeper
hierarchy and 4) a bucket name can only consist of basic
alphanumeric characters plus dot (.) and dash (-). No spaces,
no accented or UTF-8 letters, etc.
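
Bucket-name rule 4 can be sanity-checked locally before calling
"s3cmd mb". The regular expression below is our own rough
approximation of the rules, not anything shipped with S3cmd:

```shell
# Rough local check of a candidate bucket name: basic alphanumerics
# plus dot (.) and dash (-) only. The regex is our approximation.
check_bucket_name() {
    if echo "$1" | grep -Eq '^[A-Za-z0-9.-]+$'; then
        echo "looks OK"
    else
        echo "invalid characters"
    fi
}
check_bucket_name "logix.cz-test"   # prints: looks OK
check_bucket_name "my bucket"       # prints: invalid characters
```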

On the other hand there are almost no restrictions on object
names ("keys"). These can be any UTF-8 strings up to 1024
bytes long. Interestingly enough, an object name can contain
the forward slash character (/), so "my/funny/picture.jpg" is
a valid object name. Note that there are no directories or
buckets called "my" and "funny" - it is really a single object
named "my/funny/picture.jpg" and S3 does not care at
all that it _looks_ like a directory structure.

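Since a key is one flat string, any "directory" structure in it
is only apparent. A small shell sketch of the distinction:

```shell
# An S3 key is a single flat string; the slashes are ordinary characters.
key="my/funny/picture.jpg"
echo "$key"         # the whole string is the object name: my/funny/picture.jpg
echo "${key##*/}"   # what merely *looks* like a basename: picture.jpg
```
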
To describe objects in S3 storage we invented a URI-like
schema of the following form:

  s3://BUCKET/OBJECT

See the HowTo later in this document for example usages of
this S3-URI schema.

Simple S3cmd HowTo
------------------
1) Register for Amazon AWS / S3
   Go to http://aws.amazon.com/s3, click the "Sign up
   for web service" button in the right column and work
   through the registration. You will have to supply
   your credit card details in order to allow Amazon
   to charge you for S3 usage.
   At the end you should possess your Access and Secret Keys.

2) Run "s3cmd --configure"
   You will be asked for the two keys - copy and paste
   them from your confirmation email or from your Amazon
   account page. Be careful when copying them! They are
   case sensitive and must be entered accurately or you'll
   keep getting errors about invalid signatures or similar.

3) Run "s3cmd ls" to list all your buckets.
   As you have just started using S3, there are no buckets
   owned by you as of now. So the output will be empty.

4) Make a bucket with "s3cmd mb s3://my-new-bucket-name"
   As mentioned above, bucket names must be unique amongst
   _all_ users of S3. That means simple names like "test"
   or "asdf" are already taken and you must make up something
   more original. I sometimes prefix my bucket names with
   my e-mail domain name (logix.cz), leading to a bucket name
   like, for instance, 'logix.cz-test':

   ~$ s3cmd mb s3://logix.cz-test
   Bucket 'logix.cz-test' created

5) List your buckets again with "s3cmd ls"
   Now you should see your freshly created bucket:

   ~$ s3cmd ls
   2007-01-19 01:41  s3://logix.cz-test

6) List the contents of the bucket:

   ~$ s3cmd ls s3://logix.cz-test
   Bucket 'logix.cz-test':
   ~$

   It's empty, indeed.

7) Upload a file into the bucket:

   ~$ s3cmd put addressbook.xml s3://logix.cz-test/addrbook.xml
   File 'addressbook.xml' stored as s3://logix.cz-test/addrbook.xml (123456 bytes)

8) Now we can list the bucket contents again:

   ~$ s3cmd ls s3://logix.cz-test
   Bucket 'logix.cz-test':
   2007-01-19 01:46    120k  s3://logix.cz-test/addrbook.xml

9) Retrieve the file back and verify that it hasn't been
   corrupted:

   ~$ s3cmd get s3://logix.cz-test/addrbook.xml addressbook-2.xml
   Object s3://logix.cz-test/addrbook.xml saved as 'addressbook-2.xml' (123456 bytes)

   ~$ md5sum addressbook.xml addressbook-2.xml
   39bcb6992e461b269b95b3bda303addf  addressbook.xml
   39bcb6992e461b269b95b3bda303addf  addressbook-2.xml

   The checksum of the original file matches that of the
   retrieved one. Looks like it worked :-)

10) Clean up: delete the object and remove the bucket:

    ~$ s3cmd rb s3://logix.cz-test
    ERROR: S3 error: 409 (Conflict): BucketNotEmpty

    Ouch, we can only remove empty buckets!

    ~$ s3cmd del s3://logix.cz-test/addrbook.xml
    Object s3://logix.cz-test/addrbook.xml deleted

    ~$ s3cmd rb s3://logix.cz-test
    Bucket 'logix.cz-test' removed

Hints
-----
The basic usage is as simple as described in the previous
section.

You can increase the level of verbosity with the -v option, and
if you're really keen to know what the program does under
its bonnet, run it with -d to see all 'debugging' output.

After configuring it with --configure, all available options
are written into your ~/.s3cfg file. It's a text file, ready
to be modified in your favourite text editor.
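
For illustration, a freshly generated ~/.s3cfg typically begins
with a [default] section holding the two keys (a hypothetical
excerpt; the actual file contains many more options):

  [default]
  access_key = <your Access Key>
  secret_key = <your Secret Key>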

Multiple local files may be specified for the "s3cmd put"
operation. In that case the S3 URI should only include
the bucket name, not the object part:

~$ s3cmd put file-* s3://logix.cz-test/
File 'file-one.txt' stored as s3://logix.cz-test/file-one.txt (4 bytes)
File 'file-two.txt' stored as s3://logix.cz-test/file-two.txt (4 bytes)

Alternatively, if you specify the object part as well, it
will be treated as a prefix and all filenames given on the
command line will be appended to the prefix, making up
the object name. However, the --force option is required in
this case:

~$ s3cmd put --force file-* s3://logix.cz-test/prefixed:
File 'file-one.txt' stored as s3://logix.cz-test/prefixed:file-one.txt (4 bytes)
File 'file-two.txt' stored as s3://logix.cz-test/prefixed:file-two.txt (4 bytes)

This prefixing mode works with "s3cmd ls" as well:

~$ s3cmd ls s3://logix.cz-test
Bucket 'logix.cz-test':
2007-01-19 02:12     4   s3://logix.cz-test/file-one.txt
2007-01-19 02:12     4   s3://logix.cz-test/file-two.txt
2007-01-19 02:12     4   s3://logix.cz-test/prefixed:file-one.txt
2007-01-19 02:12     4   s3://logix.cz-test/prefixed:file-two.txt

Now with a prefix to list only names beginning with "file-":

~$ s3cmd ls s3://logix.cz-test/file-*
Bucket 'logix.cz-test':
2007-01-19 02:12     4   s3://logix.cz-test/file-one.txt
2007-01-19 02:12     4   s3://logix.cz-test/file-two.txt

For more information refer to:
* S3cmd / S3tools homepage at http://s3tools.sourceforge.net
* Amazon S3 homepage at http://aws.amazon.com/s3

Enjoy!

Michal Ludvig
* michal@logix.cz
* http://www.logix.cz/michal