Weird things happen when deleting just created files #2211
Comments
A rescan on the node which deleted them should fix it. |
I rescanned all the nodes, but I can't see any changes.
Clicking on failed items pops up a message like this:
The following items could not be synchronized. They are retried
automatically and will be synced when the error is resolved.
M10.py: no such file or directory
M3.py: no such file or directory
..............................................
|
Do you have steps how to reproduce this reliably? |
Probably syncing latency is involved:
1) create a file
2) modify the file
   -- the cluster receives the notification but has not yet synced it
3) the file is deleted before the changes propagate
4) nodes keep searching/pulling for changes to the deleted file
Otherwise I'd say it happens randomly.
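The steps above can be sketched as a small shell sequence. This is only an illustration of the timing, not something Syncthing-specific: a scratch directory stands in for the real synced folder (in reality this would be the shared folder, e.g. ~/Sync), and the file name M10.py is borrowed from the error list earlier in the thread.

```shell
# Scratch directory standing in for the Syncthing shared folder
# (assumption: in reality this would be e.g. ~/Sync).
SYNC_DIR=$(mktemp -d)

# 1) create a file
echo "initial" > "$SYNC_DIR/M10.py"

# 2) modify it -- the cluster is notified of the new version
#    but has not pulled it yet
echo "changed" >> "$SYNC_DIR/M10.py"

# 3) delete it before the change propagates
rm "$SYNC_DIR/M10.py"

# 4) other nodes now try to pull a version of a file that no longer
#    exists, which surfaces as "no such file or directory"
ls "$SYNC_DIR"
```

If the delete in step 3 happens before the other nodes finish pulling the version announced in step 2, they are left requesting data for a file that is gone.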
|
Can you actually reproduce it with the steps above? |
Also, restarting the originator node should fix this. |
I restarted all nodes and now only 2 are online.
Originator has:
global state: 33915
local state: 33889
out of sync: 26 items, 0 bytes?
Backup node has:
global state: 33915
local state: 33918 ??
out of sync: 3 items, 384 bytes
This is happening only for the folder that I modify most frequently.
|
Is one of the nodes a master? |
Neither node is a master.
I modify the files only from the "originator". I have another
originator, but it has been turned off for months.
The problem is quite recent.
I just upgraded to v0.11.22, but it is still out of sync.
I use "Ignore Permissions" on every node.
|
Are you using a translated version and actually mean "master" when you say "originator"? |
I guessed at the interpretation of a previous question:
originator: the node pushing the changes.
I used the term only to give an idea of the nodes' roles.
|
I deleted the most out-of-sync node, then accepted and re-added the folder.
Now everything is in sync.
|
I had this exact error between two Linux systems. Syncthing was running as a normal user. I copied files into the user's sync folder as root, giving the user rw access via the user's group. The user could read and write the files, but the owner was root. Syncthing gave this message on the other device. I deleted the files again and the error went away. I put them back, set file ownership to the user, and they synced up OK. I did not give it a second thought afterwards... |
Yes, it's better if syncthing runs as the user that normally modifies the files.
Otherwise you have to rely on group or world permissions, which is not a simple approach.
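A rough, non-root way to spot the ownership problem described above is to list files in the folder that are not owned by the user Syncthing runs as. The scratch directory and file name below are placeholders; in practice you would point the command at the real Syncthing folder.

```shell
# Scratch directory standing in for the Syncthing folder
# (assumption: in reality this would be the shared folder path).
SYNC_DIR=$(mktemp -d)
touch "$SYNC_DIR/M3.py"

# Files not owned by the Syncthing user are the likely culprits;
# files copied in as root would show up here:
find "$SYNC_DIR" ! -user "$(whoami)"

# If any appear, hand them over so Syncthing can read, write and
# delete them (this part requires root, hence commented out):
#   sudo chown -R "$(whoami)": "$SYNC_DIR"
```

On a healthy folder the `find` command prints nothing; any output is a file Syncthing may fail to delete or update.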
|
I seem to have encountered the same issue (i.e. devices reporting "Out of Sync", and 0-byte files, after adding and immediately deleting files in ~/Sync). I have st 0.11.25 installed on computers A, B, C (all Debian testing). On A I add a folder "current" to ~/Sync with 40 MB of data. Then, after B and C start pulling, I realize it's the wrong folder, so I remove it. Finally (I'm not sure if this part is important for what follows), I add another file to ~/Sync on A.

The result is that all three machines report being "out of sync", and all three report the other two to be "up to date". ~/Sync on A looks the way it should. ~/Sync on B and C additionally have the folder "current" with files in it, but all files have size 0. Machine C keeps reporting (and presumably B as well, but I haven't figured out how to check it) messages of the sort:

[WUZ2F] 00:30:59 INFO: Puller (folder "default", dir "current/Dyson"): delete: remove /home/luke/Sync/current/Dyson: directory not empty
[WUZ2F] 00:30:59 INFO: Puller (folder "default", dir "current"): delete: remove /home/luke/Sync/current: directory not empty
[WUZ2F] 00:30:59 INFO: Folder "default" isn't making progress. Pausing puller for 1m0s.

Then it pauses for 1m and repeats the same messages. If I now add files to ~/Sync on either A, B, or C, they seem to sync fine (although all three still report that they are out of sync and that the other two machines are up to date).

Addendum: 1) when I removed the folders "current" from B and C, all three machines, without restarting st, report being up to date (and report both other machines as up to date). |
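Addendum 1 above amounts to removing the leftover zero-byte tree by hand. A sketch of that cleanup, with a scratch directory in place of ~/Sync on B and C (the paths under it mirror the ones in the log lines):

```shell
# Scratch directory standing in for ~/Sync on B or C.
SYNC_DIR=$(mktemp -d)

# Recreate the leftover state: the tree the puller could not delete,
# containing zero-byte files.
mkdir -p "$SYNC_DIR/current/Dyson"
: > "$SYNC_DIR/current/Dyson/leftover"

# Removing the stray tree by hand is what let the cluster report
# "up to date" again in the addendum.
rm -rf "$SYNC_DIR/current"
```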
Reproducible 100% on a slow link. Not sure it is a bug as such - better than immediately deleted folders coming back I suppose. Fixed on the fly by addendum 1. There is possibly some logic to prevent this happening. Only happens when a folder is deleted before it is synced. |
I have also noticed that if I use syncthing to transfer files from A to B and delete them from A before they get to B, they will persistently show up in B's "Out of Sync Items" list while it is transferring another file. Restarting B seems to get rid of these stale entries. The stale entries alone do not put B into an "Out of Sync" state. |
I think rescan on A should fix it. |
Is this actually a bug? If so, what is it - is it the inotify interaction or something? It seems to be a natural consequence of changes happening (the delete) and not being scanned yet. |
I changed the way I update the files, and since then I haven't
experienced the same problem again.
Probably it really is a natural consequence that isn't handled.
|
"they will persistently show up in B's Out of Sync Items list while it is transferring another file" Does this mean they no longer persist once B is done transferring? If so then I'm pretty sure I've seen this before myself, but clears itself up once syncing is 'finished'. |
I had to manually remove files to get all computers to report "in sync", which to me is a bug. I'm now running the 12.* series, so there's no point redoing the test, but I plan to upgrade to 14.* very soon, after which I'll report if the issue is still there. |
Still relevant? Seems to be stale. |
Yes, it is stale; I have had no other problems.
|
During my activity I sometimes delete just-created files.
Now I have an "Out of Sync" with 26 failed items.
Every file shows "no such file or directory".