lib/model: Newly created conflicting files can be silently overwritten #3742
Yeah, it causes a bunch of other issues too. I think we already have an issue for this (or at least relating to this).
I think we check for unexpected conflicts on files we know exist, but not for files we don't know exist.
Just looking at this, a couple of thoughts come up:
I don't think it needs to be that bad. We need to do a stat before replacing, which we already do if the destination file is in the index (i.e., almost always). There will still be a window between us checking and the rename actually happening, but that's at least down to the millisecond range.
Ah, I actually hadn't known Syncthing calls stat() just before replacing the destination file. If that's the case, then that pretty much negates my argument about performance (and thus the argument for this being configurable).
always use `currentFolderFile()`'s version, which is more up-to-date than the cached version in the `sendReceiveFolder` struct (syncthing#3742)
GitHub-Pull-Request: syncthing#4317
Syncthing overwrote a conflicting file during a sync without noticing the conflict.
I expected Syncthing to notice that the file it was about to overwrite had not yet been seen by a local scan, and either rename it or create the new file under a "conflict" filename.
Syncthing Version: v0.4.11 (both sides)
OS Version: Windows 10 / FreeBSD 9.3 / Fedora 24
Steps:
1. Machine A creates file X; it is scanned and synced out.
2. Machine B independently creates its own file X before its next rescan.
3. Machine B overwrites file X without warning, since it hasn't discovered it via a scan yet.
I've tested this between a few different OSes in different directions.
Other observations:
It is slightly contrived, I know, but it is more likely when using longer rescan intervals to conserve battery / IOPS. There is no safety net if multiple machines create a file with the same name within a rescan interval.