Change find to ls -r in 30[0]_create_extlinux.sh #1161
In general regarding using 'ls' versus 'find': here things like

find /path/to/dir -name '*.tar.gz' | sort

are used because one cannot use bash globbing via commands like 'ls /path/to/dir/*.tar.gz'. /usr/sbin/rear sets the nullglob bash option, which turns such a command into plain 'ls' when '/path/to/dir/*.tar.gz' matches nothing (i.e. when no backup file exists), so that plain 'ls' would then produce nonsense. I.e. before calling

ls -r rear/*/*/syslinux.cfg

one would need to ensure that "rear/*/*/syslinux.cfg" matches at least one file.

Comparison of how much time is actually saved:

with "find":
real 0m0.007s
user 0m0.006s
sys 0m0.001s

with "ls -r":
real 0m0.018s
user 0m0.015s
sys 0m0.002s

Surprise! But even if it was faster with "ls -r" …
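The nullglob pitfall described above can be demonstrated in a few lines. This is a minimal sketch (the temporary directory and variable names are mine, not from the thread) showing why an unmatched glob under nullglob silently vanishes from the command line:

```shell
#!/usr/bin/env bash
# Hypothetical empty demo directory; it contains no *.tar.gz files.
demo_dir=$(mktemp -d)

shopt -s nullglob
# With nullglob set, a glob that matches nothing expands to NOTHING,
# so 'ls "$demo_dir"/*.tar.gz' would become plain 'ls' and list the
# current directory instead of reporting "no such file".
files=( "$demo_dir"/*.tar.gz )
echo "matches with nullglob: ${#files[@]}"      # prints "matches with nullglob: 0"

shopt -u nullglob
# Without nullglob, the unmatched pattern is kept literally,
# so the array holds one (non-existent) entry.
files=( "$demo_dir"/*.tar.gz )
echo "matches without nullglob: ${#files[@]}"   # prints "matches without nullglob: 1"

rmdir "$demo_dir"
```

This is why a plain `ls -r` on a glob needs an emptiness guard when rear's shell options are in effect.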
@lrirwin:
Yes -- each night these run:
rear -v savelayout
rear -vdD mkbackup
That creates folders like:
$BUILD_DIR/outputfs/rear/$HOSTNAME/20170110.2215/backup
The entire linux system is backed up into that folder.
I've succeeded in being able to incorporate --link-dest into ReaR's
rsync method, so there may be as many as 15 folders on any given media.
Each of them is a complete system backup, so there are thousands of
entries in each backup folder.
ls -r runs a lot faster than using find, and it gives the entries in
reverse order as well...
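The "reverse order" point can be illustrated concretely. A minimal sketch, with folder names mocked after the layout described above (hostname and dates are hypothetical): ReaR's timestamped backup folder names (YYYYMMDD.HHMM) sort lexicographically in date order, so `ls -r` lists the newest first, while `find` needs an explicit `sort -r` to match:

```shell
#!/usr/bin/env bash
# Mocked backup folder layout, modeled on
# $BUILD_DIR/outputfs/rear/$HOSTNAME/<date>/
demo=$(mktemp -d)
mkdir -p "$demo"/rear/myhost/20170108.2215 \
         "$demo"/rear/myhost/20170109.2215 \
         "$demo"/rear/myhost/20170110.2215

# Default 'ls' order is ascending; -r reverses it, so the newest
# timestamp comes first.
newest=$(ls -r "$demo"/rear/myhost | head -n 1)
echo "$newest"    # prints "20170110.2215"

# The find-based equivalent needs sort -r for the same newest-first order:
newest_find=$(find "$demo"/rear/myhost -mindepth 1 -maxdepth 1 -type d \
              | sort -r | head -n 1)
echo "${newest_find##*/}"    # prints "20170110.2215"

rm -rf "$demo"
```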
BTW - this is the --link-dest solution I came up with in
/usr/share/rear/backup/NETFS/default/50_make_backup.sh at line 90 where
the rsync option is:
===
(rsync)
    # make sure that the target is a directory
    mkdir -p $v "$backuparchive" >&2
    LINKDEST=`ls -d $BUILD_DIR/outputfs/rear/$HOSTNAME/*/backup/bin 2>/dev/null | tail -n 1 | cut -f1-8 -d"/"`
    case $LINKDEST in
        "") LINKDESTOPT="" ;;
        *)  LINKDESTOPT="--link-dest=$LINKDEST" ;;
    esac
    Log $BACKUP_PROG $v "${BACKUP_RSYNC_OPTIONS[@]}" --one-file-system --delete \
        --exclude-from=$TMP_DIR/backup-exclude.txt --delete-excluded ${LINKDESTOPT} \
        $(cat $TMP_DIR/backup-include.txt) "$backuparchive"
    $BACKUP_PROG $v "${BACKUP_RSYNC_OPTIONS[@]}" --one-file-system --delete \
        --exclude-from=$TMP_DIR/backup-exclude.txt --delete-excluded ${LINKDESTOPT} \
        $(cat $TMP_DIR/backup-include.txt) "$backuparchive"
    ;;
===
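The key step in the snippet above is computing LINKDEST: picking the most recent previous backup so rsync can hard-link unchanged files against it. A runnable sketch of that selection logic, with $BUILD_DIR and $HOSTNAME mocked (in the real script ReaR provides them). Note that the original `cut -f1-8 -d"/"` hardcodes the path depth; stripping the trailing `/bin` component is the depth-independent equivalent used here:

```shell
#!/usr/bin/env bash
# Mocked ReaR variables and two prior backups (names are hypothetical):
BUILD_DIR=$(mktemp -d)
HOSTNAME=myhost
mkdir -p "$BUILD_DIR"/outputfs/rear/$HOSTNAME/20170109.2215/backup/bin \
         "$BUILD_DIR"/outputfs/rear/$HOSTNAME/20170110.2215/backup/bin

# 'ls -d' sorts ascending, so 'tail -n 1' picks the newest backup;
# then strip '/bin' to get the backup directory itself.
LINKDEST=$(ls -d "$BUILD_DIR"/outputfs/rear/$HOSTNAME/*/backup/bin 2>/dev/null | tail -n 1)
LINKDEST=${LINKDEST%/bin}

case $LINKDEST in
    "") LINKDESTOPT="" ;;
    *)  LINKDESTOPT="--link-dest=$LINKDEST" ;;
esac
echo "$LINKDESTOPT"
# prints "--link-dest=<BUILD_DIR>/outputfs/rear/myhost/20170110.2215/backup"

rm -rf "$BUILD_DIR"
```

With that option, rsync creates hard links for files unchanged since the previous backup instead of copying them, which is where the space saving comes from.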
Saves a ton of space on backup media...
…On 01/11/2017 06:01 AM, Johannes Meixner wrote:
@lrirwin <https://github.com/lrirwin>
can you explain what you mean with
"backup folders contain thousands of files"?
By guessing from your comments it seems … It really helps to provide the information as requested in … Furthermore, preferably provide your proposed changes … Finally, see …
rear version (/usr/sbin/rear -V): 1.18 forward
Brief description of the issue:
When "backup" folders exist, using "find" takes a long time.
Suggest this change at line 164 in output/USB/Linux-i386/30[0]_create_extlinux.sh:
This is much quicker when the backup folders contain thousands of files.
It also accomplishes part of the TODO mentioned in the script.
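The actual diff is not quoted above, but from the issue title and the glob mentioned earlier in the thread (`rear/*/*/syslinux.cfg`), the proposed replacement presumably looks something like the following hedged sketch. Because rear sets nullglob, the glob must be checked for emptiness before calling `ls -r` (directory and file names below are mocked):

```shell
#!/usr/bin/env bash
shopt -s nullglob                      # rear sets this option
# Mocked layout with two timestamped backups containing syslinux.cfg:
demo=$(mktemp -d)
cd "$demo"
mkdir -p rear/myhost/20170109.2215 rear/myhost/20170110.2215
touch rear/myhost/20170109.2215/syslinux.cfg \
      rear/myhost/20170110.2215/syslinux.cfg

# Guard against an empty glob: under nullglob an unmatched pattern
# expands to nothing, and 'ls -r' with no arguments would list the
# current directory instead.
configs=( rear/*/*/syslinux.cfg )
if [ "${#configs[@]}" -gt 0 ] ; then
    listing=$(ls -r rear/*/*/syslinux.cfg)
else
    listing=""
fi
echo "$listing"
# prints:
# rear/myhost/20170110.2215/syslinux.cfg
# rear/myhost/20170109.2215/syslinux.cfg
```

Unlike `find`, this yields the configs already in newest-first order with no extra sort step.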
Thanks for all you do!