Cache local md5 #60

Merged
merged 18 commits into s3tools:master on Feb 19, 2013

2 participants

@mdomsch
s3tools member

This tree sits on top of the parallel-destinations patch. It adds a cache for local tree file md5 values, which should decrease local disk I/O for unchanged files (the common case for Fedora Infrastructure) back to the levels seen before the copy-hardlinks tree, which introduced local disk I/O to calculate md5 values for all files.

mdomsch added some commits Feb 24, 2012
@mdomsch mdomsch Apply excludes/includes at local os.walk() time 2e4769e
@mdomsch mdomsch add --delete-after option for sync 3b3727d
@mdomsch mdomsch add more --delete-after to sync variations 5ca02bd
@mdomsch mdomsch Merge remote-tracking branch 'origin/master' into merge b40aa2a
@mdomsch mdomsch Merge branch 'delete-after' into merge 598402b
@mdomsch mdomsch add Config.delete_after b62ce58
@mdomsch mdomsch Merge branch 'delete-after' into merge e1fe732
@mdomsch mdomsch fix os.walk() exclusions for new upstream code 1eaad64
@mdomsch mdomsch Merge branch 'master' into merge ad1f8cc
@mdomsch mdomsch add --delay-updates option c42c3f2
@mdomsch mdomsch finish merge 2dfe4a6
@mdomsch mdomsch Handle hardlinks and duplicate files
Minimize uploads in sync local->remote by looking for existing identical
files elsewhere in the remote destination and issuing an S3 COPY command
instead of uploading the file again.

We now store the (locally generated) md5 of the file in the
x-amz-meta-s3cmd-attrs metadata, because we can't count on the ETag
being correct due to multipart uploads.  Use this value if it's
available.

This also reduces the number of local stat() calls made, by
recording more useful information during the initial
os.walk().  This cuts the number of stat()s in half.
264ef82
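A minimal sketch of the lookup order that commit message describes, not s3cmd's actual code: prefer the locally generated md5 stored in the x-amz-meta-s3cmd-attrs metadata, and fall back to the ETag only when it can be trusted. The slash-separated key:value layout of the attrs string and the shape of the headers dict are assumptions for illustration.

```python
# Sketch: recover a trustworthy md5 for a remote object (assumed layout).
def remote_md5(headers):
    # Prefer the md5 we stored ourselves at upload time.
    attrs = headers.get('x-amz-meta-s3cmd-attrs', '')
    for field in attrs.split('/'):
        if field.startswith('md5:'):
            return field[len('md5:'):]
    # Fall back to the ETag; multipart ETags contain '-' and are not md5s.
    etag = headers.get('etag', '').strip('"')
    return etag if etag and '-' not in etag else None
```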
@mdomsch mdomsch hardlink/copy fix
If remote doesn't have any copies of the file, we transfer one
instance first, then copy thereafter.  But we were dereferencing the
destination list improperly in this case, causing a crash.  This patch
fixes the crash cleanly.
a6e43c4
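The flow the two commits above describe, as a hedged sketch rather than the real file-list code: group local files by md5, upload one instance per digest, and record the rest as server-side COPY operations against the freshly uploaded key. The shape of local_list (relative path -> dict with an 'md5' entry) is an assumption.

```python
import collections

# Sketch: split a local file list into uploads and server-side copies.
def plan_transfers(local_list):
    by_md5 = collections.defaultdict(list)
    for rel_path, info in local_list.items():
        by_md5[info['md5']].append(rel_path)

    uploads, copies = [], []
    for paths in by_md5.values():
        first, rest = paths[0], paths[1:]
        uploads.append(first)                    # transfer one instance...
        copies.extend((first, p) for p in rest)  # ...then S3 COPY the rest
    return uploads, copies
```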
@mdomsch mdomsch remote_copy() doesn't need to know dst_list anymore cdf25f9
@mdomsch mdomsch handle remote->local transfers with local hardlink/copy if possible
Reworked some of the hardlink / same-file detection code to be a
little more general-purpose.  Now it can be used to detect duplicate
files on either the remote or the local side.

When transferring remote->local, if we already have a local copy (same
md5sum) of a file we would otherwise transfer, don't transfer it;
hardlink it instead.  Should hardlink not be available (e.g. on
Windows), use shutil.copy2() instead.  This lets us avoid the second
download completely.

_get_filelist_local() grew an initial list argument.  This lets us
avoid copying / merging / updating a bunch of different lists back
into one - it starts as one list and grows.  Much cleaner (the fact
that these were separate lists cost me several hours of debugging,
tracking down why something like the by_md5 hash would get set only
to be empty shortly thereafter).
f881b16
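The local shortcut described above reduces to a few lines. This sketch assumes the md5sums have already been matched and that `existing` names the local duplicate:

```python
import os
import shutil

# Sketch: materialize a remote object from a local duplicate instead of
# downloading it a second time.
def link_or_copy(existing, dst):
    try:
        os.link(existing, dst)       # hardlink where the platform allows it
    except (AttributeError, OSError):
        shutil.copy2(existing, dst)  # e.g. Windows, or a cross-device link
```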
@mdomsch mdomsch sync: add --add-destination, parallelize uploads to multiple destinations

Only meaningful at present in the sync local->remote(s) case, this
adds the --add-destination <foo> command line option.  For the last
argument (the traditional destination) and each destination specified
via --add-destination, fork and upload after the initial walk of the
local file system has completed (and has done all the disk I/O to
calculate md5 values for each file).

This keeps us from pounding the file system doing (the same) disk I/O
for each possible destination, and allows full use of our bandwidth to
upload in parallel.
7de0789
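A rough sketch of that fan-out under a POSIX fork() model; `upload_tree` stands in for the real per-destination transfer loop and is not an s3cmd function:

```python
import os

# Sketch: one child process per destination, all fed from the same
# local_list built (with md5 values) by a single os.walk().
def upload_all(local_list, destinations, upload_tree):
    children = []
    for dest in destinations:
        pid = os.fork()
        if pid == 0:
            upload_tree(local_list, dest)  # child: one destination
            os._exit(0)
        children.append(pid)
    for pid in children:
        os.waitpid(pid, 0)                 # parent: wait for every upload
```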
@mdomsch mdomsch add local tree MD5 caching
This creates and maintains a cache (aka HashCache) of each inode in
the local tree.  It is used to avoid doing local disk I/O to
calculate an MD5 value for a file if its inode, mtime, and size
haven't changed.  If any of these values has changed, the disk
I/O is done as before.

This introduces command line option --cache-file <foo>.  The file is
created if it does not exist, is read upon start and written upon
close. The contents are only useful for a given directory tree, so
caches should not be reused for different directory tree syncs.
11e5755
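The mechanism boils down to a dictionary keyed by file identity. This sketch uses a (dev, inode, mtime, size) tuple and pickle for the on-disk format, both of which are assumptions rather than s3cmd's actual layout:

```python
import hashlib
import os
import pickle

class HashCache:
    """Sketch: cache md5 values so unchanged files are never re-read."""

    def __init__(self, path):
        self.path = path
        try:
            with open(path, 'rb') as f:
                self.entries = pickle.load(f)
        except (OSError, EOFError, pickle.PickleError):
            self.entries = {}                  # created on first save()

    def md5(self, filename):
        st = os.stat(filename)
        key = (st.st_dev, st.st_ino, int(st.st_mtime), st.st_size)
        if key not in self.entries:            # new or changed: do the I/O
            h = hashlib.md5()
            with open(filename, 'rb') as f:
                for chunk in iter(lambda: f.read(1 << 20), b''):
                    h.update(chunk)
            self.entries[key] = h.hexdigest()
        return self.entries[key]

    def save(self):
        with open(self.path, 'wb') as f:
            pickle.dump(self.entries, f)
```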
@mdomsch mdomsch HashCache: add missing break during purge 0d0b339
@mludvig mludvig merged commit 0d0b339 into s3tools:master Feb 19, 2013