
* README: Updated for 0.9.9

* s3cmd, S3/PkgInfo.py, s3cmd.1: Replaced project 
  URLs with http://s3tools.org
* NEWS: Improved message.



git-svn-id: https://s3tools.svn.sourceforge.net/svnroot/s3tools/s3cmd/trunk@372 830e0280-6d2a-0410-9c65-932aecc39d9d
1 parent 8567b8e commit 4927c909fa093c82f4e3fb4a1c6931d1619b4750 (mludvig committed Feb 14, 2009)
Showing with 197 additions and 103 deletions.
  1. +7 −0 ChangeLog
  2. +2 −2 NEWS
  3. +180 −93 README
  4. +1 −1 S3/PkgInfo.py
  5. +6 −6 s3cmd
  6. +1 −1 s3cmd.1
7 ChangeLog
@@ -1,3 +1,10 @@
+2009-02-14 Michal Ludvig <michal@logix.cz>
+
+ * README: Updated for 0.9.9
+ * s3cmd, S3/PkgInfo.py, s3cmd.1: Replaced project
+ URLs with http://s3tools.org
+ * NEWS: Improved message.
+
2009-02-12 Michal Ludvig <michal@logix.cz>
* s3cmd: Added --list-md5 for 'ls' command.
4 NEWS
@@ -5,8 +5,8 @@ s3cmd 0.9.9
s3cmd 0.9.9-rc3 - 2009-02-02
===============
-* Fixed crash in S3Error().__str__() (typically Amazon's Internal
- errors, etc).
+* Fixed crash: AttributeError: 'S3Error' object has no attribute '_message'
+ (bug #2547322)
s3cmd 0.9.9-rc2 - 2009-01-30
===============
273 README
@@ -5,10 +5,17 @@ Author:
Michal Ludvig <michal@logix.cz>
S3tools / S3cmd project homepage:
- http://s3tools.sourceforge.net
+ http://s3tools.org
-S3tools / S3cmd mailing list:
- s3tools-general@lists.sourceforge.net
+S3tools / S3cmd mailing lists:
+ * Announcements of new releases:
+ s3tools-announce@lists.sourceforge.net
+
+ * General questions and discussion about usage:
+ s3tools-general@lists.sourceforge.net
+
+ * Bug reports:
+ s3tools-bugs@lists.sourceforge.net
Amazon S3 homepage:
http://aws.amazon.com/s3
@@ -79,49 +86,92 @@ to be like US$0.03 or even nil.
That's the pricing model of Amazon S3 in a nutshell. Check
Amazon S3 homepage at http://aws.amazon.com/s3 for more
-details.
+details.
Needless to say, all this money is charged by Amazon
itself; there is obviously no charge for using S3cmd :-)
Amazon S3 basics
----------------
-Files stored in S3 are called "objects" and their names are
-officially called "keys". Each object belongs to exactly one
-"bucket". Buckets are kind of directories or folders with
-some restrictions: 1) each user can only have 100 buckets at
-the most, 2) bucket names must be unique amongst all users
-of S3, 3) buckets can not be nested into a deeper
-hierarchy and 4) a name of a bucket can only consist of basic
-alphanumeric characters plus dot (.) and dash (-). No spaces,
-no accented or UTF-8 letters, etc.
-
-On the other hand there are almost no restrictions on object
-names ("keys"). These can be any UTF-8 strings of up to 1024
-bytes long. Interestingly enough the object name can contain
-forward slash character (/) thus a "my/funny/picture.jpg" is
-a valid object name. Note that there are not directories nor
-buckets called "my" and "funny" - it is really a single object
-name called "my/funny/picture.jpg" and S3 does not care at
-all that it _looks_ like a directory structure.
+Files stored in S3 are called "objects" and their names are
+officially called "keys". Since this is sometimes confusing
+for users, we often refer to the objects as "files" or
+"remote files". Each object belongs to exactly one "bucket".
To describe objects in S3 storage we invented a URI-like
schema in the following form:
+ s3://BUCKET
+or
s3://BUCKET/OBJECT
-See the HowTo later in this document for example usages of
-this S3-URI schema.
+Buckets
+-------
+Buckets are sort of like directories or folders with some
+restrictions:
+1) each user can only have 100 buckets at the most,
+2) bucket names must be unique amongst all users of S3,
+3) buckets can not be nested into a deeper hierarchy and
+4) a name of a bucket can only consist of basic alphanumeric
+ characters plus dot (.) and dash (-). No spaces, no accented
+ or UTF-8 letters, etc.
+
+It is a good idea to use DNS-compatible bucket names. That
+means, for instance, no upper case characters. While DNS
+compliance is not strictly required, some features described
+below are not available for buckets with DNS-incompatible
+names. One step further is using a fully qualified domain
+name (FQDN) for a bucket - that has even more benefits.
+
+* For example "s3://--My-Bucket--" is not DNS compatible.
+* On the other hand "s3://my-bucket" is DNS compatible but
+ is not FQDN.
+* Finally "s3://my-bucket.s3tools.org" is DNS compatible
+ and FQDN provided you own the s3tools.org domain and can
+ create the domain record for "my-bucket.s3tools.org".
+
+Look for "Virtual Hosts" later in this text for more details
+regarding FQDN named buckets.
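+
+A sketch of the benefit (assuming you own s3tools.org and have
+set up DNS for the bucket as the "Virtual Hosts" section
+describes): a Public file in a FQDN-named bucket, e.g.
+
+  http://public.s3tools.org.s3.amazonaws.com/somefile.xml
+
+can then also be reached under your own domain name:
+
+  http://public.s3tools.org/somefile.xml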
+
+Objects (files stored in Amazon S3)
+-----------------------------------
+Unlike buckets, object names face almost no restrictions.
+They can be any UTF-8 strings up to 1024 bytes long.
+Interestingly enough an object name can contain the forward
+slash character (/), thus "my/funny/picture.jpg" is a valid
+object name. Note that there are no directories or
+buckets called "my" and "funny" - it is really a single object
+named "my/funny/picture.jpg" and S3 does not care at
+all that it _looks_ like a directory structure.
+
+The full URI of such an image could be, for example:
-Simple S3cmd HowTo
+ s3://my-bucket/my/funny/picture.jpg
+
+Public vs Private files
+-----------------------
+The files stored in S3 can be either Private or Public. The
+Private ones are readable only by the user who uploaded them,
+while the Public ones can be read by anyone. Additionally,
+Public files can be accessed using the HTTP protocol, not only
+with s3cmd or a similar tool.
+
+The ACL (Access Control List) of a file can be set at the
+time of upload using --acl-public or --acl-private options
+with 's3cmd put' or 's3cmd sync' commands (see below).
+
+Alternatively the ACL can be altered for existing remote files
+with 's3cmd setacl --acl-public' (or --acl-private) command.
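+
+For example (a sketch, using the bucket created in the HowTo
+below; the progress output is omitted here):
+
+  ~$ s3cmd put --acl-public some-file.xml s3://public.s3tools.org/somefile.xml
+
+uploads the file as Public, and
+
+  ~$ s3cmd setacl --acl-private s3://public.s3tools.org/somefile.xml
+
+makes it Private again.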
+
+Simple s3cmd HowTo
------------------
1) Register for Amazon AWS / S3
Go to http://aws.amazon.com/s3, click the "Sign up
for web service" button in the right column and work
through the registration. You will have to supply
    your Credit Card details in order to allow Amazon
    to charge you for S3 usage.
- At the end you should posses your Access and Secret Keys
+ At the end you should have your Access and Secret Keys
2) Run "s3cmd --configure"
You will be asked for the two keys - copy and paste
@@ -135,66 +185,137 @@ Simple S3cmd HowTo
you as of now. So the output will be empty.
4) Make a bucket with "s3cmd mb s3://my-new-bucket-name"
- As mentioned above bucket names must be unique amongst
+ As mentioned above the bucket names must be unique amongst
_all_ users of S3. That means the simple names like "test"
or "asdf" are already taken and you must make up something
- more original. I sometimes prefix my bucket names with
- my e-mail domain name (logix.cz) leading to a bucket name,
- for instance, 'logix.cz-test':
+ more original. To demonstrate as many features as possible
+ let's create a FQDN-named bucket s3://public.s3tools.org:
- ~$ s3cmd mb s3://logix.cz-test
- Bucket 'logix.cz-test' created
+ ~$ s3cmd mb s3://public.s3tools.org
+ Bucket 's3://public.s3tools.org' created
5) List your buckets again with "s3cmd ls"
Now you should see your freshly created bucket
~$ s3cmd ls
- 2007-01-19 01:41 s3://logix.cz-test
+ 2009-01-28 12:34 s3://public.s3tools.org
6) List the contents of the bucket
- ~$ s3cmd ls s3://logix.cz-test
- Bucket 'logix.cz-test':
+ ~$ s3cmd ls s3://public.s3tools.org
~$
It's empty, indeed.
-7) Upload a file into the bucket
+7) Upload a single file into the bucket:
+
+ ~$ s3cmd put some-file.xml s3://public.s3tools.org/somefile.xml
+ some-file.xml -> s3://public.s3tools.org/somefile.xml [1 of 1]
+ 123456 of 123456 100% in 2s 51.75 kB/s done
+
+ Upload two directory trees into the bucket's virtual 'directory':
+
+ ~$ s3cmd put --recursive dir1 dir2 s3://public.s3tools.org/somewhere/
+ File 'dir1/file1-1.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-1.txt' [1 of 5]
+ File 'dir1/file1-2.txt' stored as 's3://public.s3tools.org/somewhere/dir1/file1-2.txt' [2 of 5]
+ File 'dir1/file1-3.log' stored as 's3://public.s3tools.org/somewhere/dir1/file1-3.log' [3 of 5]
+ File 'dir2/file2-1.bin' stored as 's3://public.s3tools.org/somewhere/dir2/file2-1.bin' [4 of 5]
+ File 'dir2/file2-2.txt' stored as 's3://public.s3tools.org/somewhere/dir2/file2-2.txt' [5 of 5]
+
+ As you can see, we didn't have to create the /somewhere
+ 'directory'. In fact it's only a filename prefix, not
+ a real directory, and it doesn't have to be created in
+ any way beforehand.
+
+8) Now list the bucket contents again:
- ~$ s3cmd put addressbook.xml s3://logix.cz-test/addrbook.xml
- File 'addressbook.xml' stored as s3://logix.cz-test/addrbook.xml (123456 bytes)
+ ~$ s3cmd ls s3://public.s3tools.org
+ DIR s3://public.s3tools.org/somewhere/
+ 2009-02-10 05:10 123456 s3://public.s3tools.org/somefile.xml
-8) Now we can list the bucket contents again
+ Use --recursive (or -r) to list all the remote files:
- ~$ s3cmd ls s3://logix.cz-test
- Bucket 'logix.cz-test':
- 2007-01-19 01:46 120k s3://logix.cz-test/addrbook.xml
+ ~$ s3cmd ls --recursive s3://public.s3tools.org
+ 2009-02-10 05:10 123456 s3://public.s3tools.org/somefile.xml
+ 2009-02-10 05:13 18 s3://public.s3tools.org/somewhere/dir1/file1-1.txt
+ 2009-02-10 05:13 8 s3://public.s3tools.org/somewhere/dir1/file1-2.txt
+ 2009-02-10 05:13 16 s3://public.s3tools.org/somewhere/dir1/file1-3.log
+ 2009-02-10 05:13 11 s3://public.s3tools.org/somewhere/dir2/file2-1.bin
+ 2009-02-10 05:13 8 s3://public.s3tools.org/somewhere/dir2/file2-2.txt
-9) Retrieve the file back and verify that its hasn't been
- corrupted
+9) Retrieve one of the files back and verify that it hasn't been
+ corrupted:
- ~$ s3cmd get s3://logix.cz-test/addrbook.xml addressbook-2.xml
- Object s3://logix.cz-test/addrbook.xml saved as 'addressbook-2.xml' (123456 bytes)
+ ~$ s3cmd get s3://public.s3tools.org/somefile.xml some-file-2.xml
+ s3://public.s3tools.org/somefile.xml -> some-file-2.xml [1 of 1]
+ 123456 of 123456 100% in 3s 35.75 kB/s done
- ~$ md5sum addressbook.xml addressbook-2.xml
- 39bcb6992e461b269b95b3bda303addf addressbook.xml
- 39bcb6992e461b269b95b3bda303addf addressbook-2.xml
+ ~$ md5sum some-file.xml some-file-2.xml
+ 39bcb6992e461b269b95b3bda303addf some-file.xml
+ 39bcb6992e461b269b95b3bda303addf some-file-2.xml
    The checksum of the original file matches the checksum of
    the retrieved one. Looks like it worked :-)
-10) Clean up: delete the object and remove the bucket
+ To retrieve a whole 'directory tree' from S3 use recursive get:
- ~$ s3cmd rb s3://logix.cz-test
- ERROR: S3 error: 409 (Conflict): BucketNotEmpty
+ ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere
+ File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as './somewhere/dir1/file1-1.txt'
+ File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as './somewhere/dir1/file1-2.txt'
+ File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as './somewhere/dir1/file1-3.log'
+ File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as './somewhere/dir2/file2-1.bin'
+ File s3://public.s3tools.org/somewhere/dir2/file2-2.txt saved as './somewhere/dir2/file2-2.txt'
- Ouch, we can only remove empty buckets!
+ Since the destination directory wasn't specified, s3cmd
+ saved the directory structure in the current working
+ directory ('.').
- ~$ s3cmd del s3://logix.cz-test/addrbook.xml
- Object s3://logix.cz-test/addrbook.xml deleted
+ There is an important difference between:
+ get s3://public.s3tools.org/somewhere
+ and
+ get s3://public.s3tools.org/somewhere/
+ (note the trailing slash)
+ S3cmd always uses the last path part, i.e. the word
+ after the last slash, for naming files.
+
+ In the case of s3://.../somewhere the last path part
+ is 'somewhere' and therefore the recursive get names
+ the local files as somewhere/dir1, somewhere/dir2, etc.
- ~$ s3cmd rb s3://logix.cz-test
- Bucket 'logix.cz-test' removed
+ On the other hand in s3://.../somewhere/ the last path
+ part is empty and s3cmd will only create 'dir1' and 'dir2'
+ without the 'somewhere/' prefix:
+
+ ~$ s3cmd get --recursive s3://public.s3tools.org/somewhere/ /tmp
+ File s3://public.s3tools.org/somewhere/dir1/file1-1.txt saved as '/tmp/dir1/file1-1.txt'
+ File s3://public.s3tools.org/somewhere/dir1/file1-2.txt saved as '/tmp/dir1/file1-2.txt'
+ File s3://public.s3tools.org/somewhere/dir1/file1-3.log saved as '/tmp/dir1/file1-3.log'
+ File s3://public.s3tools.org/somewhere/dir2/file2-1.bin saved as '/tmp/dir2/file2-1.bin'
+
+ See? It's /tmp/dir1 and not /tmp/somewhere/dir1 as it
+ was in the previous example.
+
+10) Clean up - delete the remote files and remove the bucket:
+
+ Remove everything under s3://public.s3tools.org/somewhere/
+
+ ~$ s3cmd del --recursive s3://public.s3tools.org/somewhere/
+ File s3://public.s3tools.org/somewhere/dir1/file1-1.txt deleted
+ File s3://public.s3tools.org/somewhere/dir1/file1-2.txt deleted
+ ...
+
+ Now try to remove the bucket:
+
+ ~$ s3cmd rb s3://public.s3tools.org
+ ERROR: S3 error: 409 (BucketNotEmpty): The bucket you tried to delete is not empty
+
+ Ouch, we forgot about s3://public.s3tools.org/somefile.xml
+ We can force the bucket removal anyway:
+
+ ~$ s3cmd rb --force s3://public.s3tools.org/
+ WARNING: Bucket is not empty. Removing all the objects from it first. This may take some time...
+ File s3://public.s3tools.org/somefile.xml deleted
+ Bucket 's3://public.s3tools.org/' removed
Hints
-----
@@ -207,44 +328,10 @@ its bonnet run it with -d to see all 'debugging' output.
After configuring it with --configure, all available options
are written into your ~/.s3cfg file. It's a text file ready
-to be modified in your favourite text editor.
-
-Multiple local files may be specified for "s3cmd put"
-operation. In that case the S3 URI should only include
-the bucket name, not the object part:
-
-~$ s3cmd put file-* s3://logix.cz-test/
-File 'file-one.txt' stored as s3://logix.cz-test/file-one.txt (4 bytes)
-File 'file-two.txt' stored as s3://logix.cz-test/file-two.txt (4 bytes)
-
-Alternatively if you specify the object part as well it
-will be treated as a prefix and all filenames given on the
-command line will be appended to the prefix making up
-the object name. However --force option is required in this
-case:
-
-~$ s3cmd put --force file-* s3://logix.cz-test/prefixed:
-File 'file-one.txt' stored as s3://logix.cz-test/prefixed:file-one.txt (4 bytes)
-File 'file-two.txt' stored as s3://logix.cz-test/prefixed:file-two.txt (4 bytes)
-
-This prefixing mode works with "s3cmd ls" as well:
-
-~$ s3cmd ls s3://logix.cz-test
-Bucket 'logix.cz-test':
-2007-01-19 02:12 4 s3://logix.cz-test/file-one.txt
-2007-01-19 02:12 4 s3://logix.cz-test/file-two.txt
-2007-01-19 02:12 4 s3://logix.cz-test/prefixed:file-one.txt
-2007-01-19 02:12 4 s3://logix.cz-test/prefixed:file-two.txt
-
-Now with a prefix to list only names beginning with "file-":
-
-~$ s3cmd ls s3://logix.cz-test/file-*
-Bucket 'logix.cz-test':
-2007-01-19 02:12 4 s3://logix.cz-test/file-one.txt
-2007-01-19 02:12 4 s3://logix.cz-test/file-two.txt
+to be modified in your favourite text editor.
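+
+For example (a sketch, reusing the bucket name from the HowTo;
+the -d output is verbose and mostly useful for bug reports),
+a simple listing run with -d shows the complete HTTP
+conversation with Amazon S3:
+
+~$ s3cmd -d ls s3://public.s3tools.org
+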
For more information refer to:
-* S3cmd / S3tools homepage at http://s3tools.sourceforge.net
+* S3cmd / S3tools homepage at http://s3tools.org
* Amazon S3 homepage at http://aws.amazon.com/s3
Enjoy!
2 S3/PkgInfo.py
@@ -1,6 +1,6 @@
package = "s3cmd"
version = "0.9.9-rc3"
-url = "http://s3tools.logix.cz"
+url = "http://s3tools.org"
license = "GPL version 2"
short_description = "Command line tool for managing Amazon S3 and CloudFront services"
long_description = """
12 s3cmd
@@ -485,7 +485,7 @@ def cmd_object_del(args):
if Config().recursive and not Config().force:
raise ParameterError("Please use --force to delete ALL contents of %s" % uri)
elif not Config().recursive:
- raise ParameterError("Object name required, not only the bucket name")
+ raise ParameterError("File name required, not only the bucket name")
subcmd_object_del_uri(uri)
def subcmd_object_del_uri(uri, recursive = None):
@@ -504,7 +504,7 @@ def subcmd_object_del_uri(uri, recursive = None):
uri_list.append(uri)
for _uri in uri_list:
response = s3.object_delete(_uri)
- output(u"Object %s deleted" % _uri)
+ output(u"File %s deleted" % _uri)
def subcmd_cp_mv(args, process_fce, message):
src_uri = S3Uri(args.pop(0))
@@ -526,11 +526,11 @@ def subcmd_cp_mv(args, process_fce, message):
def cmd_cp(args):
s3 = S3(Config())
- subcmd_cp_mv(args, s3.object_copy, "Object %(src)s copied to %(dst)s")
+ subcmd_cp_mv(args, s3.object_copy, "File %(src)s copied to %(dst)s")
def cmd_mv(args):
s3 = S3(Config())
- subcmd_cp_mv(args, s3.object_move, "Object %(src)s moved to %(dst)s")
+ subcmd_cp_mv(args, s3.object_move, "File %(src)s moved to %(dst)s")
def cmd_info(args):
s3 = S3(Config())
@@ -1277,10 +1277,10 @@ def get_commands_list():
#{"cmd":"mkdir", "label":"Make a virtual S3 directory", "param":"s3://BUCKET/path/to/dir", "func":cmd_mkdir, "argc":1},
{"cmd":"sync", "label":"Synchronize a directory tree to S3", "param":"LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR", "func":cmd_sync, "argc":2},
{"cmd":"du", "label":"Disk usage by buckets", "param":"[s3://BUCKET[/PREFIX]]", "func":cmd_du, "argc":0},
- {"cmd":"info", "label":"Get various information about Buckets or Objects", "param":"s3://BUCKET[/OBJECT]", "func":cmd_info, "argc":1},
+ {"cmd":"info", "label":"Get various information about Buckets or Files", "param":"s3://BUCKET[/OBJECT]", "func":cmd_info, "argc":1},
{"cmd":"cp", "label":"Copy object", "param":"s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]", "func":cmd_cp, "argc":2},
{"cmd":"mv", "label":"Move object", "param":"s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]", "func":cmd_mv, "argc":2},
- {"cmd":"setacl", "label":"Modify Access control list for Bucket or Object", "param":"s3://BUCKET[/OBJECT]", "func":cmd_setacl, "argc":1},
+ {"cmd":"setacl", "label":"Modify Access control list for Bucket or Files", "param":"s3://BUCKET[/OBJECT]", "func":cmd_setacl, "argc":1},
## CloudFront commands
{"cmd":"cflist", "label":"List CloudFront distribution points", "param":"", "func":CfCmd.info, "argc":0},
{"cmd":"cfinfo", "label":"Display CloudFront distribution point parameters", "param":"[cf://DIST_ID]", "func":CfCmd.info, "argc":0},
2 s3cmd.1
@@ -235,5 +235,5 @@ For the most up to date list of options run
.br
For more info about usage, examples and other related info visit project homepage at
.br
-.B http://s3tools.logix.cz
+.B http://s3tools.org
