
flaky test: upgrade/upgrade.test.lua #338

Closed
Gerold103 opened this issue Jun 1, 2022 · 0 comments

@Gerold103
Collaborator

Sometimes storage_1_b doesn't upgrade its schema in time: the test expects vshard schema version 0.1.16.0, but the replica still reports 0.1.15.0.

upgrade/upgrade.test.lua                                        [ fail ]

Test failed! Result content mismatch:
--- upgrade/upgrade.result	Mon May 23 22:08:57 2022
+++ /home/runner/work/vshard/vshard/test/var/rejects/upgrade/upgrade.reject	Mon May 23 22:13:09 2022
@@ -180,11 +180,10 @@
  | ...
 box.space._schema:get({'vshard_version'})
  | ---
- | - ['vshard_version', 0, 1, 16, 0]
  | ...
 vshard.storage.internal.schema_current_version()
  | ---
- | - '{0.1.16.0}'
+ | - '{0.1.15.0}'
  | ...
 vshard.storage.internal.schema_latest_version
  | ---

Last 15 lines of Tarantool Log file [Instance "box"][/home/runner/work/vshard/vshard/test/var/001_upgrade/box.log]:
2022-05-23 22:13:07.730 [6394] main/101/box I> assigned id 1 to replica 5e6cfaff-6501-40ce-933e-d3da41253e71
2022-05-23 22:13:07.730 [6394] main/101/box I> cluster uuid 4d77faa2-08ec-45c0-8070-30db701e2a4b
2022-05-23 22:13:07.732 [6394] snapshot/101/main I> saving snapshot `/home/runner/work/vshard/vshard/test/var/001_upgrade/box/00000000000000000000.snap.inprogress'
2022-05-23 22:13:07.734 [6394] snapshot/101/main I> done
2022-05-23 22:13:07.734 [6394] main/101/box I> ready to accept requests
2022-05-23 22:13:07.734 [6394] main/108/checkpoint_daemon I> started
2022-05-23 22:13:07.734 [6394] main/108/checkpoint_daemon I> scheduled the next snapshot at Mon May 23 23:52:50 2022
2022-05-23 22:13:07.735 [6394] main/113/console/::1:12142 I> started
2022-05-23 22:13:07.735 [6394] main C> entering the event loop
Previous HEAD position was e42d3e3 doc: create 0.1.20 changelog
HEAD is now at 79a4dbf Improve compatibility with 1.9
2022-05-23 22:13:08.296 [6394] main/115/console/::1:47504 I> Waiting until slaves are connected to a master
2022-05-23 22:13:08.301 [6394] main/115/console/::1:47504 I> Slaves are connected to a master "storage_1_a"
2022-05-23 22:13:08.301 [6394] main/115/console/::1:47504 I> Waiting until slaves are connected to a master
2022-05-23 22:13:08.406 [6394] main/115/console/::1:47504 I> Slaves are connected to a master "storage_2_a"
Reproduce file /home/runner/work/vshard/vshard/test/var/reproduce/001_upgrade.list.yaml
---
- [upgrade/upgrade.test.lua, null]
...

The logs don't reveal much. It happens on Tarantool 1.10; I could only reproduce it in CI, and it disappears after a few re-runs.

@sergos sergos added the teamS Scaling label Jul 12, 2022
@Gerold103 Gerold103 self-assigned this Aug 1, 2022
Gerold103 added a commit that referenced this issue Aug 1, 2022
The replica (storage_1_b) sometimes didn't have time to receive
the schema upgrade from the master (storage_1_a). The fix is to
wait for it explicitly.

Closes #338
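The actual commit body is not shown here, so as a rough illustration only: an explicit wait on the replica could look something like the sketch below. It assumes the test harness's `test_run:wait_cond()` helper (which polls a predicate until it returns true or a timeout expires) and the `vshard_version` tuple layout visible in the diff above; the real fix may differ.

```lua
-- Hypothetical sketch: on storage_1_b, block until the schema
-- upgrade performed on the master (storage_1_a) has been replicated.
-- The predicate checks that _schema already holds the expected
-- vshard version tuple {'vshard_version', 0, 1, 16, 0}.
test_run:wait_cond(function()
    local v = box.space._schema:get({'vshard_version'})
    return v ~= nil and v[2] == 0 and v[3] == 1
       and v[4] == 16 and v[5] == 0
end)
```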
Gerold103 added a commit that referenced this issue Aug 8, 2022