It seems like it's doing something smart (namely, detecting whether files have changed and copying only what it has to). But the description in the man page makes it sound like it's doing something very dumb:
-f | --force
re-creates a file if it is already present.
To me, "re-creating" the file sounds like it clobbers the existing version and starts over from scratch, which would be very time-consuming, especially if the data is being transferred over the internet. So I have been afraid to use this feature to resume or validate partial or completed datasets.
However, through trial and error I'm seeing that when copying existing/identical files with -f, the rate seems to be limited only by my disk I/O speed at the source, which I infer means it is checksumming the files. Or maybe disk caches just work far better than I ever imagined...
So could someone please confirm, which is it? If it's doing the smart thing, that is really a nice feature and should be advertised in the man file and --help output!
The force option does essentially the same thing as the "-f" option on the cp command: if the destination file exists, it is deleted and the file is copied again in full. Without -f, xrdcp complains that the file already exists and aborts. The xrdcp command has no feature to resume a partially copied file or to copy only the parts that changed.
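A quick sketch of the behavior described above, assuming a hypothetical server and file path (the remote URL and local path are placeholders, and this requires an XRootD client installation). The --cksum option shown is xrdcp's built-in checksum verification, which validates the transfer itself; it does not skip unchanged files:

```shell
# Destination does not exist yet: plain copy succeeds.
xrdcp root://example.org//data/file.root /local/file.root

# Destination now exists: without -f, xrdcp refuses and aborts.
xrdcp root://example.org//data/file.root /local/file.root   # fails: file exists

# With -f, the existing destination is deleted and the file
# is transferred again in full -- there is no rsync-style delta copy.
xrdcp -f root://example.org//data/file.root /local/file.root

# To validate data integrity during the (re)transfer, request a
# checksum comparison against the source (server support required):
xrdcp -f --cksum adler32 root://example.org//data/file.root /local/file.root
```

If the goal is only to validate an already-complete copy without re-transferring it, comparing checksums out of band (e.g. querying the server's checksum for the remote file and computing the same checksum locally) is the cheaper route, since -f always pays for a full transfer.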