Fix consistency for add()/remove() operations that are divided into chunks due to the number of rows being updated #2000
When consistent modify/delete operations get chunked because they exceed the default number of operations (200) in Collection.modify(), all chunks after the first are now flagged with "isAdditionalChunk" so that consistent operations are never applied multiple times on the sync server.
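To make the intent concrete, here is a minimal TypeScript sketch of the chunking side. The ConsistentModifyOperation shape and the chunkConsistentOperation helper are hypothetical stand-ins for the real dexie-cloud mutation types; the point is only that every chunk after the first carries the flag.

```ts
// Hypothetical operation shape; not the actual dexie-cloud-addon internals.
interface ConsistentModifyOperation {
  type: "modify";
  criteria: unknown;        // the serialized where()-clause expression
  changeSpec: unknown;      // e.g. { age: add(1) }
  keys: unknown[];          // primary keys affected by this chunk
  isAdditionalChunk?: boolean;
}

function chunkConsistentOperation(
  criteria: unknown,
  changeSpec: unknown,
  allKeys: unknown[],
  chunkSize = 200
): ConsistentModifyOperation[] {
  const ops: ConsistentModifyOperation[] = [];
  for (let i = 0; i < allKeys.length; i += chunkSize) {
    ops.push({
      type: "modify",
      criteria,
      changeSpec,
      keys: allKeys.slice(i, i + chunkSize),
      // Only the first chunk may trigger the consistent (criteria-based)
      // execution on the server; all later chunks are flagged.
      ...(i > 0 ? { isAdditionalChunk: true } : {}),
    });
  }
  return ops;
}
```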
We already make sure that if a where()...modify() operation matches zero rows locally, the operation is still synced to the server, since it could still match rows there.
What we had not yet handled is the case where the modify or delete operation is split into multiple chunks in DBCore, producing multiple DBOperations synced to the Dexie Cloud Server; the server would then apply the criteria and its updates once per chunk. This is harmless for idempotent operations, such as deletions or adding to a set, but mathematical add/subtract is not idempotent.
If a query such as:

    db.people.where('name').startsWith('A').modify({
      age: add(1)
    });

...happens to match 1000 local people, each of them would end up having their age incremented not by 1 but by 5 when modifyChunkSize is 200, since the operation would produce 5 mutations of 200 rows each, all carrying the criteria and the changeSpec. On arrival, the server would ignore the changes the client computed, run the criteria against its own database instead, and execute the addition 5 times, once per chunk. With this commit, all but the first chunk are flagged with isAdditionalChunk=true, so the server executes the consistent operation only for the initial chunk and skips it for the rest. The keys of the remaining chunks, and the local results computed from them, are still important information for the server, which is why those chunks are still synced rather than dropped.
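For illustration only, here is a hedged sketch of how a sync server could honor the flag. The ServerDb interface and helper names are hypothetical, not the real Dexie Cloud Server API; the operation shape is the one from the sketch above.

```ts
// Hypothetical server-side surface, used only to show the control flow.
interface ServerDb {
  queryKeys(criteria: unknown): unknown[];
  applyChangeSpec(keys: unknown[], changeSpec: unknown): void;
  recordAffectedKeys(keys: unknown[]): void;
}

function applyConsistentOperation(
  op: ConsistentModifyOperation, // shape from the sketch above
  db: ServerDb
): void {
  if (!op.isAdditionalChunk) {
    // First chunk: re-run the criteria server-side and apply the changeSpec
    // exactly once, no matter how many chunks the client produced.
    const matchingKeys = db.queryKeys(op.criteria);
    db.applyChangeSpec(matchingKeys, op.changeSpec);
  }
  // Every chunk, flagged or not, still reports the keys it touched locally,
  // so the server keeps full knowledge of which rows the client modified.
  db.recordAffectedKeys(op.keys);
}
```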