When stateful failover is enabled and all instances of a non-all-rw replicaset are expelled, the user is unable to add new instances to that replicaset.
Steps to reproduce:

1. Enable stateful failover with the etcd2 state provider and ensure it works.
2. Find out the replicaset_uuid of storage-1 and remember it.
3. Set weight = 0 for storage-1.
4. Wait until all buckets have been moved to storage-0.
5. Disable storage-1-1.
6. Expel storage-1-1.
7. Disable storage-1-0.
8. Expel storage-1-0.
9. Ensure that the storage-1 replicaset is no longer present in the topology.
10. Create two new unconfigured instances from scratch with the same advertise URIs as the expelled instances.
11. Try to join them with the vshard-storage role, all-rw = false, weight = 100, and the replicaset_uuid found at step 2.
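The reproduction steps above can be sketched with the Cartridge Lua API. The endpoints, URIs, and UUID below are placeholders, and the original environment may have used the WebUI or GraphQL API instead of these calls:

```lua
local cartridge = require('cartridge')

-- Step 1: enable stateful failover backed by etcd2
-- (endpoint and prefix are hypothetical).
cartridge.failover_set_params({
    mode = 'stateful',
    state_provider = 'etcd2',
    etcd2_params = {
        prefix = '/kv',
        endpoints = {'http://127.0.0.1:2379'},
    },
})

-- Step 11: join the fresh instances into a replicaset with the
-- remembered UUID, vshard-storage role, all_rw = false, weight = 100.
cartridge.admin_edit_topology({
    replicasets = {{
        uuid = 'aaaaaaaa-0000-0000-0000-000000000001', -- uuid remembered at step 2
        roles = {'vshard-storage'},
        all_rw = false,
        weight = 100,
        join_servers = {
            {uri = 'storage-1-0.example:3301'}, -- same advertise URIs
            {uri = 'storage-1-1.example:3301'}, -- as the expelled instances
        },
    }},
})
```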
Actual result:
One of the instances falls into the "OperationError" state; the other is stuck in the "BootstrappingBox" state.
Both instances are read-only and cannot apply the configuration:
2022-07-27 13:13:19.195 [14] main/147/remote_control/10.244.0.57:44388 confapplier.lua:133 E> Instance entering failed state: ConfiguringRoles -> OperationError
ApplyConfigError: Can't modify data on a read-only instance - box.cfg.read_only is true
stack traceback:
builtin/box/schema.lua:3037: in function 'create'
/usr/share/tarantool/kv/app/roles/storage.lua:32: in function </usr/share/tarantool/kv/app/roles/storage.lua:4>
[C]: in function 'xpcall'
/usr/share/tarantool/kv/.rocks/share/tarantool/errors.lua:145: in function 'pcall'
.../tarantool/kv/.rocks/share/tarantool/cartridge/roles.lua:365: in function 'apply_config'
...tool/kv/.rocks/share/tarantool/cartridge/confapplier.lua:282: in function <...tool/kv/.rocks/share/tarantool/cartridge/confapplier.lua:244>
[C]: in function 'xpcall'
/usr/share/tarantool/kv/.rocks/share/tarantool/errors.lua:145: in function </usr/share/tarantool/kv/.rocks/share/tarantool/errors.lua:139>
[C]: in function 'pcall'
...l/kv/.rocks/share/tarantool/cartridge/remote-control.lua:72: in function 'fn'
...l/kv/.rocks/share/tarantool/cartridge/remote-control.lua:139: in function <...l/kv/.rocks/share/tarantool/cartridge/remote-control.lua:132>
2022-07-27 13:13:19.195 [14] main/141/remote_control/10.244.0.57:44388 utils.c:463 E> LuajitError: builtin/socket.lua:88: attempt to use closed socket
...
2022-07-27 13:15:02.419 [14] main/174/main box.cc:217 E> ER_READONLY: Can't modify data on a read-only instance - box.cfg.read_only is true
2022-07-27 13:15:03.421 [14] main/174/main box.cc:217 E> ER_READONLY: Can't modify data on a read-only instance - box.cfg.read_only is true
... (the ER_READONLY message repeats every second)
Expected result:
Both instances join the cluster without errors and start normally.
Important notices:
After expelling all instances from the replicaset (step 9), an entry about the leader is still present in etcd and in require('cartridge.vars').new('cartridge.roles.coordinator').client.session on the coordinator instance.
Clearing the etcd /leaders entry and the coordinator's in-memory state does not help.
The bug is not reproducible on all-rw replicasets.
The bug is not reproducible with eventual or disabled failover.
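The stale leader entry mentioned in the first notice can be inspected and cleared with the etcd v2 CLI; the prefix and endpoint below are placeholders for whatever the state provider was configured with, and, as noted above, deleting the key does not resolve the issue:

```shell
# Inspect the stateful-failover keys (prefix '/kv' is hypothetical).
ETCDCTL_API=2 etcdctl --endpoints http://127.0.0.1:2379 ls /kv --recursive
ETCDCTL_API=2 etcdctl --endpoints http://127.0.0.1:2379 get /kv/leaders

# Attempt to clear the stale leader mapping (does not help, per the notice above).
ETCDCTL_API=2 etcdctl --endpoints http://127.0.0.1:2379 rm /kv/leaders
```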