Fix consistency for add()/remove() operations that are divided into chunks due to the number of rows being updated #2000

Merged 1 commit into master on May 26, 2024

Conversation

dfahlander (Collaborator)

When consistent modify/delete operations get chunked because they exceed the default number of operations per chunk (200) in Collection.modify(), the additional chunks will now be flagged with "isAdditionalChunk" so that consistent operations are never applied multiple times on the sync server.

We already make sure that if a where()...modify() operation results in zero matches locally, we still sync the operation to the server in case it matches rows there, which is always a possibility.
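For instance, a query like the following is still synced even when nothing matches locally (the 'Zebra' prefix and shoeSize property are purely illustrative values, not from this PR):

```js
// No local rows may match this criteria, but the operation is synced
// anyway so the server can apply it to rows the client hasn't pulled yet.
await db.people.where('name').startsWith('Zebra').modify({ shoeSize: 10 });
```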

What we had not yet taken care of was the case where the modify or delete operation results in multiple chunks in DBCore, and therefore multiple DBOperations synced to the Dexie Cloud Server: the server would apply the criteria and its updates once per chunk. This is not an issue for idempotent operations such as adding to a set or deleting, but mathematical add/subtract is not idempotent.

Take a query such as:

```js
db.people.where('name').startsWith('A').modify({
  age: add(1)
});
```

If there happen to be 1000 local people matching the criteria, those people would end up having their age increased not by 1 but by 5 when modifyChunkSize is 200, since the operation results in 5 mutations of 200 rows each, all carrying the criteria and changeSpec. When reaching the server, it would ignore the changes the client computed and instead run the criteria on its own database, executing the addition 5 times, once per chunk. With this commit, all but the first chunk are flagged with isAdditionalChunk=true, making the server execute the consistent operation only on the initial chunk and ignore the rest. However, the keys of the remaining chunks are still important information for the server, as are the local results that came out of them.
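To illustrate, here is a minimal sketch of the chunking logic described above; apart from isAdditionalChunk, the function and property names are hypothetical and not the actual Dexie Cloud internals:

```js
// Hypothetical sketch: split the affected primary keys into chunks and
// flag every chunk except the first, so the sync server runs the
// consistent (criteria + changeSpec) part of the operation only once.
function buildMutationChunks(keys, criteria, changeSpec, chunkSize = 200) {
  const chunks = [];
  for (let i = 0; i < keys.length; i += chunkSize) {
    chunks.push({
      keys: keys.slice(i, i + chunkSize), // keys still matter to the server
      criteria,   // same query criteria in every chunk
      changeSpec, // same changeSpec in every chunk
      isAdditionalChunk: i > 0 // server skips criteria/changeSpec when true
    });
  }
  return chunks;
}
```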

…chunks will be flagged with "isAdditionalChunk" so that consistent operations are never applied multiple times on the sync server.

For delete operations, this is only an optimisation, since executing the deletions several times based on the given criteria is not an error.

For modify operations, this was not an issue until we added PropModifications.

PropModifications can perform mathematical addition and subtraction, and those operations must not be applied twice if consistency is to be watertight.
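A tiny illustration of the difference, with plain numbers standing in for the server re-applying an operation once per chunk (purely illustrative):

```js
let age = 42;

// Idempotent change: setting a property yields the same result no matter
// how many chunks the server re-applies it for.
for (let chunk = 0; chunk < 5; chunk++) age = 43; // age === 43

// Non-idempotent change: a mathematical add compounds per chunk.
for (let chunk = 0; chunk < 5; chunk++) age += 1; // age === 48, not 44
```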

dfahlander merged commit 076dc15 into master on May 26, 2024. 5 checks passed.
dfahlander deleted the consistent-bulk-ops branch on May 26, 2024 at 13:28.