Another database corruption problem? #247
Thanks for reporting! Seems like you attract these types of issues. Like before, can you provide the database files and/or local databases, plus full log files? I know it's a lot to ask every time, but it's the only way to solve these types of issues.
Sure, I will package the three DBs. Strangely, I have the simplest usage pattern: I no longer use the daemon (to avoid problems), there are no concurrent up/down operations, and I just use it to pass a savegame back and forth between office and home, so the transaction count per day is really low (2-3 each side).
After briefly looking at this, it looks like this is closely related to #231, but I think this one will be much easier to debug. Thanks for providing the log. Possible/likely reason:
I continued analyzing this issue. I still feel that we (possibly @pimotte and I) should talk again about the way cleanup currently works, to avoid these types of issues entirely instead of fixing one issue after the other. In particular, I would like to discuss whether it makes sense to let the CleanupOperation create some kind of snapshot -- a point in time that all clients respect as the status quo. Meaning: database versions that were already part of a cleanup will never conflict with later database versions. Said differently: a database version that existed when a cleanup happened will never disappear or lose a conflict. Good idea?

However, until then, here are a few things I discovered:

(a) The checksum causing problems ( @pimotte Not sure if this is correct: did we want the merging to depend on whether the total number of database files has been exceeded, or on the per-client number of databases? See syncany/syncany-lib/src/main/java/org/syncany/operations/cleanup/CleanupOperation.java, Line 342 in dde514d

(b) The evil checksum (
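The proposed snapshot rule can be sketched roughly as follows. This is a hypothetical illustration of the idea discussed above, not Syncany code; all names (`SnapshotRuleSketch`, `isProtected`, `firstWinsConflict`) and the timestamp-based ordering are invented assumptions.

```java
import java.time.Instant;

// Hypothetical sketch of the proposed "cleanup snapshot" rule:
// versions that existed at the last cleanup are settled and never lose.
public class SnapshotRuleSketch {
    // The point in time of the last cleanup, respected by all clients
    // (illustrative fixed value).
    static final Instant LAST_CLEANUP_SNAPSHOT = Instant.parse("2014-11-01T00:00:00Z");

    // A database version that already existed when the cleanup happened
    // is part of the snapshot and must never disappear or lose a conflict.
    static boolean isProtected(Instant dbVersionTime) {
        return !dbVersionTime.isAfter(LAST_CLEANUP_SNAPSHOT);
    }

    // Conflict resolution under the proposed rule: a protected version
    // always wins against a version created after the snapshot.
    static boolean firstWinsConflict(Instant a, Instant b) {
        if (isProtected(a) && !isProtected(b)) return true;
        if (!isProtected(a) && isProtected(b)) return false;
        return a.isBefore(b); // fallback ordering, purely illustrative
    }
}
```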
@ snapshots: Sounds like an exceptional idea. I have a feeling this is what we have been overlooking; it already worked this way in my mind. This will require changes to the core algorithms, but having a better-defined and more consistent state seems to be worth that.

@ number: We currently check the total number of database files, but allow it to scale linearly with the number of clients. E.g., with a maximum of 5 files per client, 17 db-A files cause a cleanup when there are 3 clients, but not when there are 4 clients.
@ snapshots: I think we should have a chat about that on IRC -- whenever you're available, obviously. That is not something I feel comfortable implementing without brainstorming about the "how" first.

@ number: Ah yes, I remember. I adjusted the log message to reflect that.
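The per-client scaling described above can be sketched in a few lines. This is a minimal illustration of the trigger condition as explained in the comments (names like `mergeNeeded` are assumptions, not the actual Syncany API):

```java
// Sketch of the cleanup/merge trigger discussed above: the limit on the
// total number of database files scales linearly with the client count.
public class CleanupThresholdSketch {
    static final int MAX_DB_FILES_PER_CLIENT = 5; // value from the example above

    // Cleanup is triggered when the TOTAL number of database files
    // exceeds maxPerClient * numberOfClients.
    static boolean mergeNeeded(int totalDatabaseFiles, int numClients) {
        return totalDatabaseFiles > MAX_DB_FILES_PER_CLIENT * numClients;
    }

    public static void main(String[] args) {
        // 17 database files: cleanup with 3 clients (limit 15),
        // but not with 4 clients (limit 20).
        System.out.println(mergeNeeded(17, 3)); // true
        System.out.println(mergeNeeded(17, 4)); // false
    }
}
```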
Continued analysis. No conclusion yet, still understanding the issue:
Found it. That would all be fine if it weren't for issue #252 -- meaning that PURGE databases are entirely lost while merging, i.e. after a merge, a PURGE database is simply an empty database.
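The #252 symptom described above can be illustrated with a toy merge. This is an assumed reconstruction for illustration only (the entry representation and `buggyMerge` are invented, not Syncany's data model):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy illustration of the #252 symptom: a merge that only carries over
// content entries silently drops purge entries, so a merged PURGE
// database ends up empty.
public class PurgeMergeSketch {
    // The buggy merge (as described above): purge entries are ignored.
    static List<String> buggyMerge(List<String> contentEntries, List<String> purgeEntries) {
        return new ArrayList<>(contentEntries); // purgeEntries dropped
    }

    public static void main(String[] args) {
        // A PURGE database holds only purge entries, no content entries.
        List<String> purges = List.of("purge fileA@v3", "purge fileB@v1");
        List<String> merged = buggyMerge(Collections.emptyList(), purges);

        // Result: the purge information is gone after the merge.
        System.out.println(merged.isEmpty()); // true
    }
}
```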
I merged this and created a new branch for #231 that does not contain this fix, so I can test before/after here: https://github.com/syncany/syncany/tree/bugfix/issue231
Setup: one home computer, one work computer, and an S3 repository. Command-line sync only.
On the work computer, I am trying to sync a change made on the home computer.
I get this error:
C:\Seb-CloudGame5>sy down
Error: Cannot determine file content for checksum 3d5cbe8caa5a6150a5eee4cca2cc3692c11063e5
Refer to help page using '--help'.
Content of the syncany log:
http://tny.cz/fe326ee7
If I try to connect a new folder to the repo using "sy connect" / "sy down", the down sync fails with the following error:
C:\SEB-CloudGame51>sy down
Error: Checksums do not match: actual 49ecf7f0c83b6830e1d5732a4a8a04c2c01731e6 != expected 69924232a1e9fd71a1451cfcf5696cf72a3fc684
Refer to help page using '--help'.
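The "Checksums do not match" error above is the typical shape of an integrity check: the SHA-1 of the downloaded data differs from the checksum recorded in the database. A minimal sketch of such a check, assuming nothing about Syncany's actual implementation (method names here are invented):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Minimal sketch of a SHA-1 integrity check of the kind behind the
// "Checksums do not match" error (illustrative, not Syncany code).
public class ChecksumCheckSketch {
    static String sha1Hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Compare the actual checksum of downloaded data with the expected one.
    static void verify(byte[] actualData, String expectedHex) throws Exception {
        String actualHex = sha1Hex(actualData);
        if (!actualHex.equals(expectedHex)) {
            throw new IllegalStateException(
                "Checksums do not match: actual " + actualHex + " != expected " + expectedHex);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sha1Hex("abc".getBytes(StandardCharsets.UTF_8)));
    }
}
```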