= AWS::S3

AWS::S3 is a Ruby library for Amazon's Simple Storage Service's REST API (http://aws.amazon.com/s3).
Full documentation of the currently supported API can be found at http://docs.amazonwebservices.com/AmazonS3/2006-03-01.

== Getting started

To get started you need to require 'aws/s3':

  % irb -rubygems
  irb(main):001:0> require 'aws/s3'
  # => true

The AWS::S3 library ships with an interactive shell called <tt>s3sh</tt>. From within it, you have access to all the operations the library exposes from the command line.

  % s3sh
  >> Version

Before you can do anything, you must establish a connection using Base.establish_connection!. A basic connection would look something like this:

  AWS::S3::Base.establish_connection!(
    :access_key_id     => 'abc',
    :secret_access_key => '123'
  )

The minimum connection options that you must specify are your access key id and your secret access key.

(If you don't already have your access keys, all you need to sign up for the S3 service is an account at Amazon. You can sign up for S3 and get access keys by visiting http://aws.amazon.com/s3.)

For convenience, if you set two special environment variables with the value of your access keys, the console will automatically create a default connection for you. For example:

  % cat .amazon_keys
  export AMAZON_ACCESS_KEY_ID='abcdefghijklmnop'
  export AMAZON_SECRET_ACCESS_KEY='1234567891012345'

Then load it in your shell's rc file.

  % cat .zshrc
  if [[ -f "$HOME/.amazon_keys" ]]; then
    source "$HOME/.amazon_keys";
  fi

See more connection details at AWS::S3::Connection::Management::ClassMethods.


== AWS::S3 Basics
=== The service, buckets and objects

The three main concepts of S3 are the service, buckets and objects.

==== The service

The service lets you find out general information about your account, like what buckets you have.

  Service.buckets
  # => []


==== Buckets

Buckets are containers for objects (the files you store on S3). To create a new bucket you just specify its name.

  # Pick a unique name, or else you'll get an error
  # if the name is already taken.
  Bucket.create('jukebox')

Bucket names must be unique across the entire S3 system, sort of like domain names across the internet. If you try
to create a bucket with a name that is already taken, you will get an error.

Assuming the name you chose isn't already taken, your new bucket will now appear in the bucket list:

  Service.buckets
  # => [#<AWS::S3::Bucket @attributes={"name"=>"jukebox"}>]

Once you have successfully created a bucket you can fetch it by name using Bucket.find.

  music_bucket = Bucket.find('jukebox')

The bucket that is returned will contain a listing of all the objects in the bucket.

  music_bucket.objects.size
  # => 0

If all you are interested in is the objects of the bucket, you can get to them directly using Bucket.objects.

  Bucket.objects('jukebox').size
  # => 0

By default all objects will be returned, though there are several options you can use to limit what is returned, such as
specifying that only objects whose names come after a certain place in the alphabet be returned. Details about these options can
be found in the documentation for Bucket.find.
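
For instance, a sketch of what such limiting might look like (the option names <tt>:prefix</tt>, <tt>:marker</tt> and <tt>:max_keys</tt> are assumptions here; check the Bucket.find documentation for the exact options supported):

  # Only objects whose keys start with '2006/' (assumes :prefix)
  Bucket.objects('jukebox', :prefix => '2006/')

  # At most 10 objects whose keys sort after 'a'
  # (assumes :max_keys and :marker)
  Bucket.objects('jukebox', :max_keys => 10, :marker => 'a')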

To add an object to a bucket you specify the name of the object, its value, and the bucket to put it in.

  file = 'black-flowers.mp3'
  S3Object.store(file, open(file), 'jukebox')

You'll see your file has been added to the bucket:

  music_bucket.objects
  # => [#<AWS::S3::S3Object '/jukebox/black-flowers.mp3'>]

You can treat your bucket like a hash and access objects by name:

  music_bucket['black-flowers.mp3']
  # => #<AWS::S3::S3Object '/jukebox/black-flowers.mp3'>

In the event that you want to delete a bucket, you can use Bucket.delete.

  Bucket.delete('jukebox')

Keep in mind that, like Unix directories, you cannot delete a bucket unless it is empty. Trying to delete a bucket
that contains objects will raise a BucketNotEmpty exception.

Passing the :force => true option to delete will take care of deleting all the bucket's objects for you.

  Bucket.delete('photos', :force => true)
  # => true


==== Objects

S3Objects represent the data you store on S3. They have a key (their name) and a value (their data). All objects belong to a
bucket.

You can store an object on S3 by specifying a key, its data and the name of the bucket you want to put it in:

  S3Object.store('me.jpg', open('headshot.jpg'), 'photos')

The content type of the object will be inferred from its extension. If the appropriate content type cannot be inferred, S3 defaults
to <tt>binary/octet-stream</tt>.

If you want to override this, you can explicitly indicate what content type the object should have with the <tt>:content_type</tt> option:

  file = 'black-flowers.m4a'
  S3Object.store(
    file,
    open(file),
    'jukebox',
    :content_type => 'audio/mp4a-latm'
  )

You can read more about storing files on S3 in the documentation for S3Object.store.

If you just want to fetch an object you've stored on S3, you just specify its name and its bucket:

  picture = S3Object.find 'headshot.jpg', 'photos'

N.B. Neither when the file appeared in the bucket listing nor when it was fetched directly was the actual data for the file downloaded.
You get the data for the file like this:

  picture.value

You can fetch just the object's data directly:

  S3Object.value 'headshot.jpg', 'photos'

Or stream it by passing a block to <tt>stream</tt>:

  open('song.mp3', 'w') do |file|
    S3Object.stream('song.mp3', 'jukebox') do |chunk|
      file.write chunk
    end
  end

The data of the file, once downloaded, is cached, so subsequent calls to <tt>value</tt> won't redownload the file unless you
tell the object to reload its <tt>value</tt>:

  # Redownloads the file's data
  song.value(:reload)

Other functionality includes:

  # Check if an object exists
  S3Object.exists? 'headshot.jpg', 'photos'

  # Copying an object
  S3Object.copy 'headshot.jpg', 'headshot2.jpg', 'photos'

  # Renaming an object
  S3Object.rename 'headshot.jpg', 'portrait.jpg', 'photos'

  # Deleting an object
  S3Object.delete 'headshot.jpg', 'photos'

==== More about objects and their metadata

You can find out the content type of your object with the <tt>content_type</tt> method:

  song.content_type
  # => "audio/mpeg"

You can change the content type as well if you like:

  song.content_type = 'application/pdf'
  song.store

(Keep in mind that due to limitations in S3's exposed API, the only way to change things like the content_type
is to PUT the object onto S3 again. In the case of large files, this will result in fully re-uploading the file.)

A bevy of information about an object can be had using the <tt>about</tt> method:

  pp song.about
  {"last-modified"    => "Sat, 28 Oct 2006 21:29:26 GMT",
   "content-type"     => "binary/octet-stream",
   "etag"             => "\"dc629038ffc674bee6f62eb64ff3a\"",
   "date"             => "Sat, 28 Oct 2006 21:30:41 GMT",
   "x-amz-request-id" => "B7BC68F55495B1C8",
   "server"           => "AmazonS3",
   "content-length"   => "3418766"}

You can get and set metadata for an object:

  song.metadata
  # => {}
  song.metadata[:album] = "A River Ain't Too Much To Love"
  # => "A River Ain't Too Much To Love"
  song.metadata[:released] = 2005
  pp song.metadata
  {"x-amz-meta-released" => 2005,
   "x-amz-meta-album"    => "A River Ain't Too Much To Love"}
  song.store

That metadata will be saved in S3 and is henceforth available from that object:

  song = S3Object.find('black-flowers.mp3', 'jukebox')
  pp song.metadata
  {"x-amz-meta-released" => "2005",
   "x-amz-meta-album"    => "A River Ain't Too Much To Love"}
  song.metadata[:released]
  # => "2005"
  song.metadata[:released] = 2006
  pp song.metadata
  {"x-amz-meta-released" => 2006,
   "x-amz-meta-album"    => "A River Ain't Too Much To Love"}


==== Streaming uploads

When storing an object on the S3 servers using S3Object.store, the <tt>data</tt> argument can be a string or an I/O stream.
If <tt>data</tt> is an I/O stream it will be read in segments and written to the socket incrementally. This approach
may be desirable for very large files so they are not read into memory all at once.

  # Non-streamed upload
  S3Object.store('greeting.txt', 'hello world!', 'marcel')

  # Streamed upload
  S3Object.store('roots.mpeg', open('roots.mpeg'), 'marcel')


== Setting the current bucket
==== Scoping operations to a specific bucket

If you plan on always using a specific bucket for certain files, you can skip always having to specify the bucket by creating
a subclass of Bucket or S3Object and telling it what bucket to use:

  class JukeBoxSong < AWS::S3::S3Object
    set_current_bucket_to 'jukebox'
  end

For all methods that take a bucket name as an argument, the current bucket will be used if the bucket name argument is omitted.

  other_song = 'baby-please-come-home.mp3'
  JukeBoxSong.store(other_song, open(other_song))

This time we didn't have to explicitly pass in the bucket name, as the JukeBoxSong class knows that it will
always use the 'jukebox' bucket.

Astute readers may also notice that this shifts the argument positions: when the bucket can be inferred, or
is explicitly set, as we've done in the JukeBoxSong class, options such as <tt>:content_type</tt> are passed as the third
argument rather than the fourth.

Now all operations that would have required a bucket name no longer do.

  other_song = JukeBoxSong.find('baby-please-come-home.mp3')


== BitTorrent
==== Another way to download large files

Objects on S3 can be distributed via the BitTorrent file sharing protocol.

You can get a torrent file for an object by calling <tt>torrent_for</tt>:

  S3Object.torrent_for 'kiss.jpg', 'marcel'

Or just call the <tt>torrent</tt> method if you already have the object:

  song = S3Object.find 'kiss.jpg', 'marcel'
  song.torrent

Calling <tt>grant_torrent_access_to</tt> on an object will allow anyone to anonymously
fetch the torrent file for that object:

  S3Object.grant_torrent_access_to 'kiss.jpg', 'marcel'

Anonymous requests to

  http://s3.amazonaws.com/marcel/kiss.jpg?torrent

will serve up the torrent file for that object.


== Access control
==== Using canned access control policies

By default buckets are private. This means that only the owner has access rights to the bucket and its objects.
Objects in that bucket inherit the permission of the bucket unless otherwise specified. When an object is private, the owner can
generate a signed url that exposes the object to anyone who has that url. Alternatively, buckets and objects can be given other
access levels. Several canned access levels are defined:

* <tt>:private</tt> - Owner gets FULL_CONTROL. No one else has any access rights. This is the default.
* <tt>:public_read</tt> - Owner gets FULL_CONTROL and the anonymous principal is granted READ access. If this policy is used on an object, it can be read from a browser with no authentication.
* <tt>:public_read_write</tt> - Owner gets FULL_CONTROL, the anonymous principal is granted READ and WRITE access. This is a useful policy to apply to a bucket, if you intend for any anonymous user to PUT objects into the bucket.
* <tt>:authenticated_read</tt> - Owner gets FULL_CONTROL, and any principal authenticated as a registered Amazon S3 user is granted READ access.

You can set a canned access level when you create a bucket or an object by using the <tt>:access</tt> option:

  S3Object.store(
    'kiss.jpg',
    data,
    'marcel',
    :access => :public_read
  )

Since the image we created is publicly readable, we can access it directly from a browser by going to the corresponding bucket name
and specifying the object's key without a special authenticated url:

  http://s3.amazonaws.com/marcel/kiss.jpg

==== Building custom access policies

For both buckets and objects, you can use the <tt>acl</tt> method to see its access control policy:

  policy = S3Object.acl('kiss.jpg', 'marcel')
  pp policy.grants
  [#<AWS::S3::ACL::Grant FULL_CONTROL to noradio>,
   #<AWS::S3::ACL::Grant READ to AllUsers Group>]

Policies are made up of one or more grants which grant a specific permission to some grantee. Here we see the default FULL_CONTROL grant
to the owner of this object. There is also READ permission granted to the AllUsers group, which means anyone has read access for the object.

Say we wanted to grant access to anyone to read the access policy of this object. The current READ permission only grants them permission to read
the object itself (for example, from a browser) but it does not allow them to read the access policy. For that we will need to grant the AllUsers group the READ_ACP permission.

First we'll create a new grant object:

  grant = ACL::Grant.new
  # => #<AWS::S3::ACL::Grant (permission) to (grantee)>
  grant.permission = 'READ_ACP'

Now we need to indicate who this grant is for. In other words, who the grantee is:

  grantee = ACL::Grantee.new
  # => #<AWS::S3::ACL::Grantee (xsi not set yet)>

There are three ways to specify a grantee: 1) by their internal Amazon id, such as the one returned with an object's Owner,
2) by their Amazon account email address or 3) by specifying a group. As of this writing you cannot create custom groups, but
Amazon does provide three already: AllUsers, Authenticated and LogDelivery. In this case we want to provide the grant to all users.
This effectively means "anyone".

  grantee.group = 'AllUsers'

Now that our grantee is set up, we'll associate it with the grant:

  grant.grantee = grantee
  grant
  # => #<AWS::S3::ACL::Grant READ_ACP to AllUsers Group>

Our grant now has all the information we need. Now that it's ready, we'll add it on to the object's access control policy's list of grants:

  policy.grants << grant
  pp policy.grants
  [#<AWS::S3::ACL::Grant FULL_CONTROL to noradio>,
   #<AWS::S3::ACL::Grant READ to AllUsers Group>,
   #<AWS::S3::ACL::Grant READ_ACP to AllUsers Group>]

Now that the policy has the new grant, we reuse the <tt>acl</tt> method to persist the policy change:

  S3Object.acl('kiss.jpg', 'marcel', policy)

If we fetch the object's policy again, we see that the grant has been added:

  pp S3Object.acl('kiss.jpg', 'marcel').grants
  [#<AWS::S3::ACL::Grant FULL_CONTROL to noradio>,
   #<AWS::S3::ACL::Grant READ to AllUsers Group>,
   #<AWS::S3::ACL::Grant READ_ACP to AllUsers Group>]

If we were to access this object's acl url from a browser:

  http://s3.amazonaws.com/marcel/kiss.jpg?acl

we would be shown its access control policy.
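
As mentioned above, a grantee can also be specified by Amazon account email address or by internal Amazon id rather than by group. A sketch of what that might look like (the accessor names and values below are assumptions; see ACL::Grantee's documentation for the exact attributes):

  # By Amazon account email address (hypothetical address)
  grantee = ACL::Grantee.new
  grantee.email_address = 'joe@example.com'

  # By internal Amazon id (placeholder value)
  grantee = ACL::Grantee.new
  grantee.id = 'some-amazon-canonical-id'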

==== Pre-prepared grants

Alternatively, the ACL::Grant class defines a set of stock grant policies that you can fetch by name. In most cases, you can
just use one of these pre-prepared grants rather than building grants by hand. Two of these stock policies are <tt>:public_read</tt>
and <tt>:public_read_acp</tt>, which happen to be the two grants that we built by hand above. In this case we could have simply written:

  policy.grants << ACL::Grant.grant(:public_read)
  policy.grants << ACL::Grant.grant(:public_read_acp)
  S3Object.acl('kiss.jpg', 'marcel', policy)

The full details can be found in ACL::Policy, ACL::Grant and ACL::Grantee.


==== Accessing private objects from a browser

All private objects are accessible via an authenticated GET request to the S3 servers. You can generate an
authenticated url for an object like this:

  S3Object.url_for('beluga_baby.jpg', 'marcel_molina')

By default authenticated urls expire 5 minutes after they were generated.

Expiration options can be specified either as an absolute time since the epoch with the <tt>:expires</tt> option,
or as a number of seconds relative to now with the <tt>:expires_in</tt> option:

  # Absolute expiration date
  # (Expires January 18th, 2038)
  doomsday = Time.mktime(2038, 1, 18).to_i
  S3Object.url_for('beluga_baby.jpg',
                   'marcel',
                   :expires => doomsday)

  # Expiration relative to now specified in seconds
  # (Expires in 3 hours)
  S3Object.url_for('beluga_baby.jpg',
                   'marcel',
                   :expires_in => 60 * 60 * 3)

You can specify whether the url should go over SSL with the <tt>:use_ssl</tt> option:

  # Url will use https protocol
  S3Object.url_for('beluga_baby.jpg',
                   'marcel',
                   :use_ssl => true)

By default, the SSL settings for the current connection will be used.

If you have an object handy, you can use its <tt>url</tt> method with the same options:

  song.url(:expires_in => 30)

To get an unauthenticated url for the object, such as in the case
when the object is publicly readable, pass the
<tt>:authenticated</tt> option with a value of <tt>false</tt>.

  S3Object.url_for('beluga_baby.jpg',
                   'marcel',
                   :authenticated => false)
  # => http://s3.amazonaws.com/marcel/beluga_baby.jpg


== Logging
==== Tracking requests made on a bucket

A bucket can be set to log the requests made on it. By default logging is turned off. You can check if a bucket has logging enabled:

  Bucket.logging_enabled_for? 'jukebox'
  # => false

Enabling it is easy:

  Bucket.enable_logging_for('jukebox')

Unless you specify otherwise, logs will be written to the bucket you want to log. The logs are just like any other object. By default they will start with the prefix 'log-'. You can customize what bucket you want the logs to be delivered to, as well as customize what the log objects' keys are prefixed with, by setting the <tt>target_bucket</tt> and <tt>target_prefix</tt> options:

  Bucket.enable_logging_for(
    'jukebox', 'target_bucket' => 'jukebox-logs'
  )

Now instead of logging right into the jukebox bucket, the logs will go into the bucket called jukebox-logs.

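The two options can be combined. For example, a sketch of also prefixing the log keys (the prefix value here is made up, and this assumes <tt>target_prefix</tt> is passed the same way as <tt>target_bucket</tt> above):

  Bucket.enable_logging_for(
    'jukebox',
    'target_bucket' => 'jukebox-logs',
    'target_prefix' => 'jukebox-access/'
  )
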
Once logs have accumulated, you can access them using the <tt>logs</tt> method:

  pp Bucket.logs('jukebox')
  [#<AWS::S3::Logging::Log '/jukebox-logs/log-2006-11-14-07-15-24-2061C35880A310A1'>,
   #<AWS::S3::Logging::Log '/jukebox-logs/log-2006-11-14-08-15-27-D8EEF536EC09E6B3'>,
   #<AWS::S3::Logging::Log '/jukebox-logs/log-2006-11-14-08-15-29-355812B2B15BD789'>]

Each log has a <tt>lines</tt> method that gives you information about each request in that log. All the fields are available
as named methods. More information is available in Logging::Log::Line.

  logs = Bucket.logs('jukebox')
  log = logs.first
  line = log.lines.first
  line.operation
  # => 'REST.GET.LOGGING_STATUS'
  line.request_uri
  # => 'GET /jukebox?logging HTTP/1.1'
  line.remote_ip
  # => "67.165.183.125"

Disabling logging is just as simple as enabling it:

  Bucket.disable_logging_for('jukebox')


== Errors
==== When things go wrong

Anything you do that makes a request to S3 could result in an error. If it does, the AWS::S3 library will raise an exception
specific to the error. All exceptions that are raised as a result of a request returning an error response inherit from the
ResponseError exception. So should you choose to rescue any such exception, you can simply rescue ResponseError.

Say you go to delete a bucket, but the bucket turns out not to be empty. This results in a BucketNotEmpty error (one of the many
errors listed at http://docs.amazonwebservices.com/AmazonS3/2006-03-01/ErrorCodeList.html):

  begin
    Bucket.delete('jukebox')
  rescue ResponseError => error
    # ...
  end

Once you've captured the exception, you can extract the error message from S3, as well as the full error response, which includes
things like the HTTP response code:

  error
  # => #<AWS::S3::BucketNotEmpty The bucket you tried to delete is not empty>
  error.message
  # => "The bucket you tried to delete is not empty"
  error.response.code
  # => 409

You could use this information to redisplay the error in a way you see fit, or just to log the error and continue on.


==== Accessing the last request's response

Sometimes methods that make requests to the S3 servers return some object, like a Bucket or an S3Object;
other times they return just <tt>true</tt>; and sometimes they raise an exception that you may want to rescue. Despite all these
possible outcomes, every method that makes a request stores its response object for you in Service.response. You can always
get to the last request's response via Service.response.

  objects = Bucket.objects('jukebox')
  Service.response.success?
  # => true

This is also useful when an error exception is raised in the console which you weren't expecting. You can
root around in the response to get more details of what might have gone wrong.
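
For instance, a sketch of poking at the stored response after a failed request (<tt>success?</tt> appears above and <tt>code</tt> is shown on error responses in the Errors section, so assuming Service.response exposes both is a guess on our part):

  begin
    Bucket.delete('jukebox')
  rescue AWS::S3::ResponseError
    Service.response.success?  # false, since the request errored
    Service.response.code      # e.g. 409 for BucketNotEmpty
  end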