TODO list for s3cmd project
===========================

- Before 1.0.0 (or ASAP after 1.0.0)
  - Make 'sync s3://bkt/some-filename local/other-filename' work
    (at the moment it'll always download).
  - Enable --exclude for [ls].
  - Allow changing the temporary directory from /tmp to somewhere else.
  - With --guess-mime-type, use the 'magic' module if available
    (see the MIME sketch after this list).
  - Support --preserve for [put] and [get]. Update manpage.
  - Don't let --continue fail if the file is already fully downloaded.
  - Option --mime-type should set the MIME type with 'cp' and 'mv'.
    If possible, --guess-mime-type should work for them as well.
  - Make upload throttling configurable.
  - Allow removing 'DefaultRootObject' from CloudFront distributions.
  - Fix: 'get s3://bucket/non-existent' creates an empty local file 'non-existent'.
  - Add a 'geturl' command, with both Unicode and URL-encoded output.
  - Add a command for generating "Query String Authentication" URLs
    (see the URL-signing sketch after this list).
  - Support --acl-grant (together with --acl-public/private) for [put] and [sync]
  - Filter 's3cmd ls' output by --bucket-location=
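
  (MIME sketch for the --guess-mime-type item above: a minimal, purely
   illustrative Python snippet. It assumes the python-magic package for
   content-based detection and falls back to the stdlib mimetypes
   module; it is not s3cmd's actual code.)

      # Prefer libmagic (content-based) when python-magic is installed,
      # otherwise fall back to extension-based guessing.
      import mimetypes

      try:
          import magic

          def guess_mime_type(path):
              # python-magic inspects the file's contents
              return magic.from_file(path, mime=True)
      except ImportError:
          def guess_mime_type(path):
              mime, _encoding = mimetypes.guess_type(path)
              return mime or "application/octet-stream"

      print(guess_mime_type("/etc/hostname"))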
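
  (URL-signing sketch for the "Query String Authentication" item above:
   the classic signature-v2 scheme, i.e. an HMAC-SHA1 signature over
   "GET\n\n\n<expires>\n/<bucket>/<key>" appended as query parameters.
   Keys, bucket and object names below are placeholders, and the exact
   command interface is left open.)

      import base64, hmac, time
      from hashlib import sha1
      from urllib.parse import quote, quote_plus

      def sign_url(access_key, secret_key, bucket, key, expires_in=3600):
          expires = int(time.time()) + expires_in
          # string-to-sign for an anonymous GET with an expiry time
          string_to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, key)
          signature = base64.b64encode(
              hmac.new(secret_key.encode(), string_to_sign.encode(), sha1).digest()
          ).decode()
          return ("https://%s.s3.amazonaws.com/%s"
                  "?AWSAccessKeyId=%s&Expires=%d&Signature=%s"
                  % (bucket, quote(key), access_key, expires, quote_plus(signature)))

      print(sign_url("AKIAEXAMPLE", "secret-key", "my-bucket", "path/to/file.txt"))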

- After 1.0.0
  - Sync must back up non-regular files as well: at least directories,
    symlinks and device nodes.
  - Speed up upload / download with multiple threads
    (see http://blog.50projects.com/p/s3cmd-modifications.html
    and the threading sketch after this list).
  - Sync should be able to update metadata (UID, timestamps, etc.)
    if only these change (i.e. same content, different metadata).
  - If GPG encryption fails, call error() and exit. If decryption
    fails, save the file with a .gpg extension.
  - Keep remote backup copies on put/sync-to if requested: move the
    old 'object' to e.g. 'object~' and only then upload the new one
    (see the backup sketch after this list). Could be extended to
    keep, say, the last 5 copies.
  - Reduce memory consumption on very large upload sets; it is
    currently terribly high.
  - Implement per-bucket (or per-regexp?) default settings. For
    example regarding ACLs, encryption, etc.
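
  (Threading sketch for the "multiple threads" item above: fan per-file
   transfers out over a small worker pool. upload_one() is a
   placeholder for whatever single-file transfer routine is used; this
   is not s3cmd's actual code.)

      from concurrent.futures import ThreadPoolExecutor

      def upload_one(local_path, remote_uri):
          # placeholder for a single PUT request
          print("uploading %s -> %s" % (local_path, remote_uri))

      def upload_many(pairs, workers=4):
          with ThreadPoolExecutor(max_workers=workers) as pool:
              futures = [pool.submit(upload_one, src, dst) for src, dst in pairs]
              for f in futures:
                  f.result()   # re-raise any transfer error in the caller

      upload_many([("a.txt", "s3://bkt/a.txt"), ("b.txt", "s3://bkt/b.txt")])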
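
  (Backup sketch for the "keep remote backup copies" item above. s3cmd
   has its own S3 client; boto3 is used here purely to illustrate the
   sequence of S3 calls: server-side copy the current object to
   'object~', then upload the replacement. Bucket and key names are
   placeholders.)

      import boto3
      from botocore.exceptions import ClientError

      def put_with_backup(s3, bucket, key, local_path):
          try:
              # server-side copy: the old data never leaves S3
              s3.copy_object(Bucket=bucket,
                             CopySource={"Bucket": bucket, "Key": key},
                             Key=key + "~")
          except ClientError:
              pass   # no previous version to back up
          s3.upload_file(local_path, bucket, key)

      put_with_backup(boto3.client("s3"), "my-bucket", "report.pdf", "./report.pdf")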

- Implement GPG for sync
  (it's not that easy, since the size of an encrypted remote object
   cannot be compared with the local file's size. Either we store the
   metadata in a dedicated file, where we face a risk of
   inconsistencies, or we store the metadata encrypted in each
   object's headers, where we'd have to issue a large number of
   object HEAD requests. Tough call.)
  Or we can only compare local timestamps with remote object
  timestamps: if the local one is older we'll *assume* it
  hasn't been changed (see the sketch below). But what to do
  about remote2local sync?
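
  (Timestamp sketch for the paragraph above: since the size of an
   encrypted remote object tells us nothing about the local file, fall
   back to "upload only if the local copy is newer". Purely
   illustrative; it assumes the remote timestamp comes from the
   object's Last-Modified header.)

      import os
      from email.utils import parsedate_to_datetime

      def needs_upload(local_path, remote_last_modified):
          """remote_last_modified: e.g. 'Wed, 12 Oct 2011 17:50:00 GMT'."""
          local_mtime = os.path.getmtime(local_path)
          remote_mtime = parsedate_to_datetime(remote_last_modified).timestamp()
          # If the local file is older than the stored copy, *assume* it
          # hasn't changed and skip the upload.
          return local_mtime > remote_mtime

      print(needs_upload("/etc/hostname", "Wed, 12 Oct 2011 17:50:00 GMT"))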

- Keep man page up to date and write some more documentation
  - Yeah, right ;-)