Ignore records from SQS which were seen in S3 #306
Conversation
Could it be that 2 records have exactly the same timestamp?
It would be good to have test coverage for this: https://github.com/brave/sync/blob/staging/test/s3Helper.js
@SergeyZhukovsky timestamp is the field closest in purpose to uniquely identifying the record. I haven't seen a case where two timestamps are exactly the same, but in theory that can rarely happen. objectId is not suitable for this because it identifies an object, not the operation. I take the timestamp from the Key, which looks like …
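As a hedged sketch of the idea above (not the actual brave/sync code, and the real Key layout is not shown in this thread), extracting the timestamp would amount to picking one slash-separated segment out of the S3 Key:

```javascript
// Hypothetical Key layout for illustration only:
// '<apiVersion>/<userId>/<categoryId>/<timestamp>/<objectId>'
// The real brave/sync key format may differ.
function timestampFromKey (key) {
  const parts = key.split('/')
  // hypothetical position of the timestamp segment
  return parts[3]
}
```

With that assumed layout, `timestampFromKey('0/u1/c1/1556280000000/obj1')` would yield `'1556280000000'`.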
@diracdeltas thanks, looking into how to make a test case.
Closed this PR because there is a better way.
I have removed the commit.
@SergeyZhukovsky @diracdeltas @darkdh Fixed according to the previous review comments:
Ignore records from SQS which were seen in S3
AlexeyBarabash commented Apr 26, 2019
There is a situation where the sync lib sends duplicate records, first from S3 and then from SQS; it may happen when syncing bookmarks soon after establishing a sync chain.
This breaks merging.
On brave-browser this was worked around with brave/brave-core#2016 (Don't send bookmarks), but that does not seem to be enough.
This PR ignores records from SQS which were already seen from S3, identified by their unique timestamp. If these records are processed twice in between "legal" records, it can corrupt the merge.
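The dedup mechanism can be sketched as follows. This is an illustrative assumption, not the PR's actual code: remember the timestamps of records already handled from S3, and skip any SQS record carrying one of those timestamps. The helper names and the Key layout used here are hypothetical.

```javascript
// Timestamps of records already processed from S3.
const seenFromS3 = new Set()

// Hypothetical: the timestamp is one slash-separated segment of the S3 Key.
function recordTimestamp (record) {
  return record.key.split('/')[3]
}

// Called for each record fetched from S3.
function markSeenFromS3 (record) {
  seenFromS3.add(recordTimestamp(record))
}

// Called for each record arriving from SQS: true means "duplicate, skip it".
function shouldIgnoreSqsRecord (record) {
  return seenFromS3.has(recordTimestamp(record))
}
```

Under this sketch, an SQS record whose timestamp matches an already-seen S3 record is dropped before merging, while records with new timestamps pass through unchanged.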
Steps to reproduce (STR) the issue using the brave-syncer branch:
folder A and bookmark sync