
Tarantool self-bootstraps in read_only mode if it fails to connect to any replica when `replication_connect_quorum' is 0 #4423

Closed
rtokarev opened this issue Aug 12, 2019 · 2 comments

@rtokarev (Contributor)

Tarantool version: 1.10.3-106-g4faa103

OS version: CentOS 6

Bug description:

When `replication_connect_quorum' is set to 0, Tarantool performs a self-bootstrap (creates its own snapshot) even when configured read-only. This violates the read-only principle.

Steps to reproduce:

[root@st-drbd1 1]# tarantool
Tarantool 1.10.3-106-g4faa103
type 'help' for interactive help
tarantool> box.cfg{ read_only = true, replication = { '0:12345' }, replication_connect_quorum = 0, replication_connect_timeout = 1 }
2019-08-12 18:40:20.072 [17248] main/101/interactive C> Tarantool 1.10.3-106-g4faa103
2019-08-12 18:40:20.072 [17248] main/101/interactive C> log level 5
2019-08-12 18:40:20.072 [17248] main/101/interactive I> mapping 268435456 bytes for memtx tuple arena...
2019-08-12 18:40:20.073 [17248] main/101/interactive I> mapping 134217728 bytes for vinyl tuple arena...
2019-08-12 18:40:20.083 [17248] main/101/interactive I> instance uuid 674ba58c-ebcc-4bfb-a886-acd828b41133
2019-08-12 18:40:20.083 [17248] main/101/interactive I> connecting to 1 replicas
2019-08-12 18:40:20.084 [17248] main/105/applier/0:12345 I> can't connect to master
2019-08-12 18:40:20.084 [17248] main/105/applier/0:12345 coio.cc:106 !> SystemError connect, called on fd 11, aka 127.0.0.1:39484: Connection refused
2019-08-12 18:40:20.084 [17248] main/105/applier/0:12345 I> will retry every 1.00 second
2019-08-12 18:40:21.084 [17248] main/101/interactive C> failed to connect to 1 out of 1 replicas
2019-08-12 18:40:21.084 [17248] main/101/interactive I> initializing an empty data directory
2019-08-12 18:40:21.152 [17248] main/101/interactive I> assigned id 1 to replica 674ba58c-ebcc-4bfb-a886-acd828b41133
2019-08-12 18:40:21.152 [17248] main/101/interactive I> cluster uuid d564b319-061f-4be6-9e45-a325aa524889
2019-08-12 18:40:21.166 [17248] snapshot/101/main I> saving snapshot `./00000000000000000000.snap.inprogress'
2019-08-12 18:40:21.211 [17248] snapshot/101/main I> done
2019-08-12 18:40:21.212 [17248] main/101/interactive I> ready to accept requests
2019-08-12 18:40:21.214 [17248] main/109/checkpoint_daemon I> started
2019-08-12 18:40:21.214 [17248] main/109/checkpoint_daemon I> scheduled the next snapshot at Mon Aug 12 20:23:53 2019
2019-08-12 18:40:21.214 [17248] main/101/interactive I> set 'read_only' configuration option to true
---
...

tarantool> [root@st-drbd1 1]# ls -l
total 8
-rw-r--r-- 1 root root 1611 Aug 12 18:40 00000000000000000000.snap
-rw-r--r-- 1 root root  102 Aug 12 18:40 00000000000000000000.xlog
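For contrast with the reproduction above, a minimal sketch of a configuration that avoids the local bootstrap: with `replication_connect_quorum' of at least 1, `box.cfg{}' on an empty data directory refuses to proceed when no master is reachable, instead of creating its own snapshot. The replication URI and timeout below are illustrative, not from the report.

```lua
-- Sketch: require at least one connected master before bootstrap.
-- With replication_connect_quorum >= 1 (the hypothetical values below
-- are for illustration), a fresh read-only instance that cannot reach
-- any master raises an error rather than self-bootstrapping.
box.cfg{
    read_only = true,
    replication = { 'replicator:password@master.example:3301' },  -- hypothetical URI
    replication_connect_quorum = 1,
    replication_connect_timeout = 30,
}
```

With quorum 0, as in the log above, the connect phase is allowed to succeed with zero connected replicas, so the instance falls through to "initializing an empty data directory".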
@kostja (Contributor)

kostja commented Aug 12, 2019

This was "fixed" by #3428
So this is apparently not a bug.

@kyukhin (Contributor)

kyukhin commented Aug 23, 2019

Closing then.

@kyukhin kyukhin closed this as completed Aug 23, 2019
3 participants