Users might start upwards replication from 0 after upgrade #6920
Comments
Related ticket: #6920. On this commit: given that Terser doesn't provide a way to select pieces of code that it should not minify, and any other solution was substantially more complex or risky, overloading toString seems like the lesser of two evils. Adds an e2e test to check the filter function for an offline user. Fixes some problems in other e2e tests.
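The toString-overloading approach described above can be sketched as follows. This is a minimal illustration, not Medic's actual code: the filter body and the pinned string are hypothetical.

```javascript
// Minifiers like Terser can rename locals, changing fn.toString() output
// between builds even when the logic is unchanged. Pinning toString to a
// fixed string keeps the value PouchDB hashes stable across builds.
// (The filter body and pinned name below are hypothetical.)
const filterFn = function (doc) {
  return doc && doc.type !== 'tombstone';
};

// Overload toString so the replication-id input never changes:
filterFn.toString = () => 'medic/client_side_filter';

console.log(filterFn.toString()); // 'medic/client_side_filter'
```

The function still executes its real body when called as a filter; only its string serialization is pinned, so minifier renames no longer leak into the replication id.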
The fix has been merged into the angular migration branch.
@dianabarsan I have some questions before starting. The repro steps are for a dev env; I am looking to recreate this in our test environment, to understand it a bit better. If the filter function was modified, would that cause the user to start over? For testing in medic-os, I can use different branches, but if the statement about modifying is true, will I need to be sure that the random branch I upgrade from didn't have a change to the filter function?
When you upgrade from 3.10 to 3.11, it will happen even with this fix (because the system will compare the old filter function with the new, and they'll be different). In 3.10 and prior, editing the filter function would most definitely trigger a re-sync. We had control over that (and could avoid editing it), but the act of editing other code could change the minified version of the filter function, without us having any control over it. From 3.11 onward, even editing the filter function should not trigger a re-sync.
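Why editing unrelated code can change the filter's minified form is plain JavaScript behavior: two functionally identical filters that differ only in a local variable name (exactly the kind of rename a minifier produces) serialize differently. The filter bodies below are illustrative only.

```javascript
// Two functionally identical filters whose only difference is a local
// variable name -- the kind of rename a minifier performs when the
// surrounding code changes:
const filterA = function (doc) { const t = doc.type; return t !== 'tombstone'; };
const filterB = function (doc) { const x = doc.type; return x !== 'tombstone'; };

console.log(filterA({ type: 'report' }) === filterB({ type: 'report' })); // true
console.log(filterA.toString() === filterB.toString()); // false
```

Same behavior, different source text: any consumer that hashes `toString()` output, as PouchDB does for replication ids, sees two different functions.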
I see. My thought then is to create a branch off master, then upgrade from master to the newly created branch. It shouldn't re-sync since it has the latest changes.
Cool!
I followed the upgrade as described above. Watching the network tab for my offline user, I see two `_revs_diff` requests. Is there a header that sets the batch size? I don't see one, which is why I'm asking.
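The batching asked about above can be sketched as follows. This is a hypothetical helper, not PouchDB's real replicator code; it only illustrates how a fixed batch size (100, per the issue body) splits a device's docs into `_revs_diff` request bodies.

```javascript
// Split local docs into _revs_diff request bodies of `batchSize` docs each.
// Each body maps _id -> list of _revs, the shape _revs_diff expects.
function revsDiffBatches(docs, batchSize = 100) {
  const batches = [];
  for (let i = 0; i < docs.length; i += batchSize) {
    const body = {};
    for (const { _id, _rev } of docs.slice(i, i + batchSize)) {
      (body[_id] = body[_id] || []).push(_rev);
    }
    batches.push(body);
  }
  return batches;
}

// 500 docs at a batch size of 100 -> 5 requests, matching the issue body:
const docs = Array.from({ length: 500 }, (_, i) => ({ _id: 'doc' + i, _rev: '1-a' }));
console.log(revsDiffBatches(docs).length); // 5
```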
In our Slack discussion, this was expected to happen, since the meta db needed to sync. The medic db didn't have a `_revs_diff` call to update. I'm closing this, as the fix is already merged.
Describe the bug
Due to PouchDB's internal algorithms and non-deterministic minification results (from UglifyJS, used pre-3.11, or Terser, used from 3.11 onward), users could restart upwards replication from 0 after an upgrade.
To Reproduce
Reproducing this is pretty tricky, as it involves changing some code in a way that will change certain variable names in the minified source. I've been pretty successful in unwillingly triggering this in the migration branch. I'll add a comment if I find a 100% reliable way, but for now:
`_revs_diff` requests (the batch size is 100, so if your user has 500 docs, you should see 5 `_revs_diff` requests).

Expected behavior
Users should not start upwards replication from 0. Seeing one `_revs_diff` request is normal, and it should contain references to the latest docs that the user has downloaded from the server: probably only the service-worker-cache.
Environment
Additional context
PouchDB generates a (hopefully) unique and stable replication ID for each replication, and uses some of the replication config options to compile it.
https://github.com/pouchdb/pouchdb/blob/1db1700382604ab0dc3ddb337096d99641ea12d8/packages/node_modules/pouchdb-generate-replication-id/src/index.js#L15
One of the config options used to generate a replication id is the provided filter function. PouchDB calls the provided value's `toString` method (whether it is a function or a string) to get a value to use. The result of the `toString` call is not deterministic, because the actual code that runs is minified for production use. So, if the minified string representation of the filter function changes between two versions, even if no changes have been made to the function itself, the newly generated replication ids will be different from the existing ones and will cause a restart in replication.
Fortunately, upwards replication is relatively cheap, as it involves hitting an unfiltered CouchDB endpoint (`_revs_diff`) with every pair of doc `_id` + `_rev` that exists on the user's device. For the overwhelming majority of cases, responses from the server will be empty (because the docs already exist on the server as well).