
A new build system that uses CMake #2

Closed
kostja opened this issue Nov 16, 2010 · 0 comments

kostja commented Nov 16, 2010

To ease porting Tarantool to other platforms, we should start using CMake to build the project.

@ghost ghost assigned kostja Feb 13, 2013
@kostja kostja closed this as completed Feb 18, 2013
delamonpansie referenced this issue in delamonpansie/octopus Feb 26, 2014
Enable `#pattern` without the `-DLUAJIT_ENABLE_LUA52COMPAT` compile flag.
rtsisyk added a commit that referenced this issue Jun 5, 2014
Tarantool supports three modes:

1. A script filename is passed to the binary, or the server is started
via #!. Tarantool reads the file contents and executes the code as a
whole chunk.
2. stdin is not a tty. Tarantool reads stdin until EOF and executes the
code as a whole chunk.
3. stdin is a tty. Tarantool starts in interactive mode and
processes console input line by line using dostring().

Note the difference between modes #2 and #3; a minimal sketch of the
mode selection follows below.
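For illustration, a minimal sketch, not Tarantool's actual startup code, of how the three modes above can be told apart with isatty(3):

```c
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc > 1) {
		/* Mode 1: a script filename was passed (also covers #!);
		 * read the file and execute it as a whole chunk. */
		printf("mode 1: run %s as a whole chunk\n", argv[1]);
	} else if (!isatty(STDIN_FILENO)) {
		/* Mode 2: stdin is not a tty; read until EOF and execute
		 * the accumulated code as a whole chunk. */
		printf("mode 2: read stdin to EOF, run as a whole chunk\n");
	} else {
		/* Mode 3: stdin is a tty; interactive console, each line
		 * executed separately (dostring() per line). */
		printf("mode 3: interactive, line by line\n");
	}
	return 0;
}
```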
kostja added a commit that referenced this issue Jul 24, 2014
@avid avid mentioned this issue Nov 1, 2014
tsafin added a commit that referenced this issue Dec 25, 2020
Now it's trying to serialize.

NB! A lot of copy-paste (with slight refactoring)
    from walker.c. Eventually they will be merged
    back into the common code.
tsafin added a commit to tsafin/tarantool that referenced this issue Jul 9, 2021
* Forgot to include msgpackffi.lua
tsafin added a commit to tsafin/tarantool that referenced this issue Jul 27, 2021
- simplified datetime cdata creation;
- moved helpers to utils.c
tsafin added a commit to tsafin/tarantool that referenced this issue Sep 1, 2021
drakonhg pushed a commit that referenced this issue Sep 2, 2021
Lord-KA added a commit to Lord-KA/tarantool that referenced this issue May 21, 2023
Gumix added a commit to Gumix/tarantool that referenced this issue Aug 23, 2023
part_count was checked in index_def_check(), which was called too late.
Before that check:
1. `malloc(sizeof(*part_def) * part_count)` can fail for a huge part_count;
2. key_def_new() can crash for a zero part_count because of an
   out-of-bounds access in:

NO_WRAP
   - #1 key_def_contains_sequential_parts (def=0x5555561a2ef0) at src/box/tuple_extract_key.cc:26
   - #2 key_def_set_extract_func (key_def=0x5555561a2ef0) at src/box/tuple_extract_key.cc:442
   - #3 key_def_set_func (def=0x5555561a2ef0) at src/box/key_def.c:162
   - #4 key_def_new (parts=0x7fffc4001350, part_count=0, for_func_index=false) at src/box/key_def.c:320
NO_WRAP

Closes tarantool#8689

NO_DOC=bugfix
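The fix direction described above can be sketched as validating part_count before it reaches malloc() and key_def_new(). This is a hypothetical illustration, not the actual patch; PART_COUNT_MAX and parts_alloc are invented names:

```c
#include <stdio.h>
#include <stdlib.h>

#define PART_COUNT_MAX 255 /* hypothetical limit, for illustration only */

struct part_def { unsigned field_no; };

static struct part_def *
parts_alloc(size_t part_count)
{
	/* Reject bad counts up front, instead of in index_def_check(),
	 * which used to run only after the allocation and key_def_new(). */
	if (part_count == 0 || part_count > PART_COUNT_MAX) {
		fprintf(stderr, "invalid part_count: %zu\n", part_count);
		return NULL;
	}
	return malloc(sizeof(struct part_def) * part_count);
}

int main(void)
{
	struct part_def *parts = parts_alloc(0); /* rejected before malloc */
	free(parts); /* free(NULL) is a no-op */
	return 0;
}
```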
Gumix added a commit to Gumix/tarantool that referenced this issue Aug 23, 2023
part_count was checked in index_def_check(), which was called too late.
Before that check:
1. `malloc(sizeof(*part_def) * part_count)` can fail for a huge part_count;
2. key_def_new() can crash for a zero part_count because of an
   out-of-bounds access in:

NO_WRAP
   - #1 key_def_contains_sequential_parts (def=0x5555561a2ef0) at src/box/tuple_extract_key.cc:26
   - #2 key_def_set_extract_func (key_def=0x5555561a2ef0) at src/box/tuple_extract_key.cc:442
   - #3 key_def_set_func (def=0x5555561a2ef0) at src/box/key_def.c:162
   - #4 key_def_new (parts=0x7fffc4001350, part_count=0, for_func_index=false) at src/box/key_def.c:320
NO_WRAP

Closes tarantool#8688

NO_DOC=bugfix
Gumix added a commit to Gumix/tarantool that referenced this issue Aug 23, 2023
locker pushed a commit that referenced this issue Aug 24, 2023
part_count was checked in index_def_check(), which was called too late.
Before that check:
1. `malloc(sizeof(*part_def) * part_count)` can fail for a huge part_count;
2. key_def_new() can crash for a zero part_count because of an
   out-of-bounds access in:

NO_WRAP
   - #1 key_def_contains_sequential_parts (def=0x5555561a2ef0) at src/box/tuple_extract_key.cc:26
   - #2 key_def_set_extract_func (key_def=0x5555561a2ef0) at src/box/tuple_extract_key.cc:442
   - #3 key_def_set_func (def=0x5555561a2ef0) at src/box/key_def.c:162
   - #4 key_def_new (parts=0x7fffc4001350, part_count=0, for_func_index=false) at src/box/key_def.c:320
NO_WRAP

Closes #8688

NO_DOC=bugfix
locker pushed a commit that referenced this issue Aug 24, 2023
part_count was checked in index_def_check(), which was called too late.
Before that check:
1. `malloc(sizeof(*part_def) * part_count)` can fail for a huge part_count;
2. key_def_new() can crash for a zero part_count because of an
   out-of-bounds access in:

NO_WRAP
   - #1 key_def_contains_sequential_parts (def=0x5555561a2ef0) at src/box/tuple_extract_key.cc:26
   - #2 key_def_set_extract_func (key_def=0x5555561a2ef0) at src/box/tuple_extract_key.cc:442
   - #3 key_def_set_func (def=0x5555561a2ef0) at src/box/key_def.c:162
   - #4 key_def_new (parts=0x7fffc4001350, part_count=0, for_func_index=false) at src/box/key_def.c:320
NO_WRAP

Closes #8688

NO_DOC=bugfix

(cherry picked from commit ef9e332)
nshy added a commit to nshy/tarantool that referenced this issue Dec 5, 2023
We may need to cancel a fiber that waits for a cord to finish. For this
purpose, let's cancel the fiber started by cord_costart inside the cord.

Note that there is a race between stopping cancel_event in the cord and
triggering it via ev_async_send in the joining thread. AFAIU it is safe.

We also need to fix how the wal cord is stopped, to address the
stack-use-after-return issue shown below. It arises because we did not
stop the async watcher that resides in the wal endpoint, while the
endpoint itself resides on the stack. Later, when we stop the newly
introduced cancel_event, we access the never-stopped async watcher,
which by that moment has gone out of scope.

```
==3224698==ERROR: AddressSanitizer: stack-use-after-return on address 0x7f654b3b0170 at pc 0x555a2817c282 bp 0x7f654ca55b30 sp 0x7f654ca55b28
WRITE of size 4 at 0x7f654b3b0170 thread T3
    #0 0x555a2817c281 in ev_async_stop /home/shiny/dev/tarantool/third_party/libev/ev.c:5492:37
    #1 0x555a27827738 in cord_thread_func /home/shiny/dev/tarantool/src/lib/core/fiber.c:1990:2
    #2 0x7f65574aa9ea in start_thread /usr/src/debug/glibc/glibc/nptl/pthread_create.c:444:8
    #3 0x7f655752e7cb in clone3 /usr/src/debug/glibc/glibc/misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
```

Part of tarantool#8423

NO_DOC=internal
NO_CHANGELOG=internal
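The stack-use-after-return above boils down to a libev watcher lifetime rule: a watcher that lives on a stack frame must be stopped before that frame returns. A minimal sketch under that assumption (cord_body and async_cb are illustrative names, not the actual wal code):

```c
#include <ev.h>

static void
async_cb(struct ev_loop *loop, ev_async *w, int revents)
{
	(void)w;
	(void)revents;
	ev_break(loop, EVBREAK_ALL);
}

static void
cord_body(struct ev_loop *loop)
{
	ev_async cancel_event; /* lives on this stack frame */

	ev_async_init(&cancel_event, async_cb);
	ev_async_start(loop, &cancel_event);

	ev_run(loop, EVRUN_NOWAIT); /* one non-blocking sweep for the demo */

	/* Stop the watcher while its frame is still alive. If the frame
	 * returned first and ev_async_stop() ran later on the same
	 * address, ASAN would report exactly the stack-use-after-return
	 * shown in the log above. */
	ev_async_stop(loop, &cancel_event);
}

int main(void)
{
	cord_body(EV_DEFAULT);
	return 0;
}
```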
nshy added a commit to nshy/tarantool that referenced this issue Dec 6, 2023
nshy added a commit to nshy/tarantool that referenced this issue Dec 7, 2023
We may need to cancel a fiber that waits for a cord to finish. For this
purpose, let's cancel the fiber started by cord_costart inside the cord.

Note that there is a race between stopping cancel_event in the cord and
triggering it via ev_async_send in the joining thread. AFAIU it is safe.

We also need to fix how the wal cord is stopped, to address the
stack-use-after-return issue shown below. It arises because we did not
stop the async watcher that resides in the wal endpoint, while the
endpoint itself resides on the stack. Later, when we stop the newly
introduced cancel_event, we access the never-stopped async watcher,
which by that moment has gone out of scope.

```
==3224698==ERROR: AddressSanitizer: stack-use-after-return on address 0x7f654b3b0170 at pc 0x555a2817c282 bp 0x7f654ca55b30 sp 0x7f654ca55b28
WRITE of size 4 at 0x7f654b3b0170 thread T3
    #0 0x555a2817c281 in ev_async_stop /home/shiny/dev/tarantool/third_party/libev/ev.c:5492:37
    #1 0x555a27827738 in cord_thread_func /home/shiny/dev/tarantool/src/lib/core/fiber.c:1990:2
    #2 0x7f65574aa9ea in start_thread /usr/src/debug/glibc/glibc/nptl/pthread_create.c:444:8
    #3 0x7f655752e7cb in clone3 /usr/src/debug/glibc/glibc/misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
```

After that, we also need to temporarily comment out freeing the applier
threads. The issue is that applier_free runs first and wal_free second,
and the former does not free its resources correctly. In particular, it
does not destroy the thread endpoint but frees its memory. As a result,
we get a use-after-free when destroying the wal endpoint.

```
==3508646==ERROR: AddressSanitizer: heap-use-after-free on address 0x61b000001a30 at pc 0x5556ff1b08d8 bp 0x7f69cb7f65c0 sp 0x7f69cb7f65b8
WRITE of size 8 at 0x61b000001a30 thread T3
    #0 0x5556ff1b08d7 in rlist_del /home/shiny/dev/tarantool/src/lib/small/include/small/rlist.h:101:19
    #1 0x5556ff1b08d7 in cbus_endpoint_destroy /home/shiny/dev/tarantool/src/lib/core/cbus.c:256:2
    #2 0x5556feea1f2c in wal_writer_f /home/shiny/dev/tarantool/src/box/wal.c:1237:2
    #3 0x5556fea3eb57 in fiber_cxx_invoke(int (*)(__va_list_tag*), __va_list_tag*) /home/shiny/dev/tarantool/src/lib/core/fiber.h:1297:10
    #4 0x5556ff19af3e in fiber_loop /home/shiny/dev/tarantool/src/lib/core/fiber.c:1160:18
    #5 0x5556ffb0fbd2 in coro_init /home/shiny/dev/tarantool/third_party/coro/coro.c:108:3

0x61b000001a30 is located 1200 bytes inside of 1528-byte region [0x61b000001580,0x61b000001b78)
freed by thread T0 here:
    #0 0x5556fe9ed8a2 in __interceptor_free.part.0 asan_malloc_linux.cpp.o
    #1 0x5556fee4ef65 in applier_free /home/shiny/dev/tarantool/src/box/applier.cc:2175:3
    #2 0x5556fedfce01 in box_storage_free() /home/shiny/dev/tarantool/src/box/box.cc:5869:2
    #3 0x5556fedfce01 in box_free /home/shiny/dev/tarantool/src/box/box.cc:5936:2
    #4 0x5556fea3cfec in tarantool_free() /home/shiny/dev/tarantool/src/main.cc:575:2
    #5 0x5556fea3cfec in main /home/shiny/dev/tarantool/src/main.cc:1087:2
    #6 0x7f69d7445ccf in __libc_start_call_main /usr/src/debug/glibc/glibc/csu/../sysdeps/nptl/libc_start_call_main.h:58:16
```

Part of tarantool#8423

NO_DOC=internal
NO_CHANGELOG=internal
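Why destroying the wal endpoint touches freed applier memory comes down to how intrusive doubly linked lists unlink nodes. A self-contained sketch, with a generic list standing in for rlist (list_del mirrors the shape of rlist_del; everything else is hypothetical):

```c
#include <stdlib.h>

struct node { struct node *prev, *next; };

static void
list_del(struct node *n) /* same shape as rlist_del() */
{
	n->prev->next = n->next; /* writes into the neighbors' memory */
	n->next->prev = n->prev;
	n->prev = n->next = n;
}

int main(void)
{
	struct node head = { &head, &head };
	struct node *applier = malloc(sizeof(*applier)); /* heap-allocated, like the applier endpoint */
	struct node wal; /* another endpoint in the same list */

	/* link: head <-> applier <-> wal <-> head */
	applier->prev = &head; applier->next = &wal;
	wal.prev = applier; wal.next = &head;
	head.next = applier; head.prev = &wal;

	/* Correct order: unlink before freeing. Freeing 'applier'
	 * without list_del() first, as applier_free() effectively did,
	 * would make the later list_del(&wal) write into freed memory,
	 * exactly the heap-use-after-free in the report above. */
	list_del(applier);
	free(applier);
	list_del(&wal);
	return 0;
}
```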
nshy added a commit to nshy/tarantool that referenced this issue Dec 7, 2023
nshy added a commit to nshy/tarantool that referenced this issue Dec 7, 2023
We may need to cancel a fiber that waits for a cord to finish. For this
purpose, let's cancel the fiber started by cord_costart inside the cord.

Note that there is a race between stopping cancel_event in the cord and
triggering it via ev_async_send in the joining thread. AFAIU it is safe.

We also need to fix how the wal cord is stopped, to address the
stack-use-after-return issue shown below. It arises because we did not
stop the async watcher that resides in the wal endpoint, while the
endpoint itself resides on the stack. Later, when we stop the newly
introduced cancel_event, we access the never-stopped async watcher,
which by that moment has gone out of scope.

```
==3224698==ERROR: AddressSanitizer: stack-use-after-return on address 0x7f654b3b0170 at pc 0x555a2817c282 bp 0x7f654ca55b30 sp 0x7f654ca55b28
WRITE of size 4 at 0x7f654b3b0170 thread T3
    #0 0x555a2817c281 in ev_async_stop /home/shiny/dev/tarantool/third_party/libev/ev.c:5492:37
    #1 0x555a27827738 in cord_thread_func /home/shiny/dev/tarantool/src/lib/core/fiber.c:1990:2
    #2 0x7f65574aa9ea in start_thread /usr/src/debug/glibc/glibc/nptl/pthread_create.c:444:8
    #3 0x7f655752e7cb in clone3 /usr/src/debug/glibc/glibc/misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
```

After that, we also need to properly destroy the endpoints in the other
threads. See, for example, the ASAN report for the applier thread below.
The issue is that the applier endpoint is linked with the wal endpoint,
and the applier endpoint's memory is freed (without proper destruction)
when we destroy the wal endpoint.

A similar issue exists with the endpoints in the vinyl threads, except
that we get a SIGSEGV with them instead of a proper ASAN report; the
likely cause is that the vinyl endpoints reside on the stack. In the
applier case we can, for the sake of this patch, temporarily comment out
freeing the applier thread memory until proper applier shutdown is
implemented. But we can't do the same for the vinyl threads. Let's just
stop cbus_loop in both cases. This is not a full shutdown solution for
either the applier or vinyl, as both may still have fibers running in
their threads; it is a temporary solution just for this patch, and we
add the missing pieces in later patches.

```
==3508646==ERROR: AddressSanitizer: heap-use-after-free on address 0x61b000001a30 at pc 0x5556ff1b08d8 bp 0x7f69cb7f65c0 sp 0x7f69cb7f65b8
WRITE of size 8 at 0x61b000001a30 thread T3
    #0 0x5556ff1b08d7 in rlist_del /home/shiny/dev/tarantool/src/lib/small/include/small/rlist.h:101:19
    #1 0x5556ff1b08d7 in cbus_endpoint_destroy /home/shiny/dev/tarantool/src/lib/core/cbus.c:256:2
    #2 0x5556feea1f2c in wal_writer_f /home/shiny/dev/tarantool/src/box/wal.c:1237:2
    #3 0x5556fea3eb57 in fiber_cxx_invoke(int (*)(__va_list_tag*), __va_list_tag*) /home/shiny/dev/tarantool/src/lib/core/fiber.h:1297:10
    #4 0x5556ff19af3e in fiber_loop /home/shiny/dev/tarantool/src/lib/core/fiber.c:1160:18
    #5 0x5556ffb0fbd2 in coro_init /home/shiny/dev/tarantool/third_party/coro/coro.c:108:3

0x61b000001a30 is located 1200 bytes inside of 1528-byte region [0x61b000001580,0x61b000001b78)
freed by thread T0 here:
    #0 0x5556fe9ed8a2 in __interceptor_free.part.0 asan_malloc_linux.cpp.o
    #1 0x5556fee4ef65 in applier_free /home/shiny/dev/tarantool/src/box/applier.cc:2175:3
    #2 0x5556fedfce01 in box_storage_free() /home/shiny/dev/tarantool/src/box/box.cc:5869:2
    #3 0x5556fedfce01 in box_free /home/shiny/dev/tarantool/src/box/box.cc:5936:2
    #4 0x5556fea3cfec in tarantool_free() /home/shiny/dev/tarantool/src/main.cc:575:2
    #5 0x5556fea3cfec in main /home/shiny/dev/tarantool/src/main.cc:1087:2
    #6 0x7f69d7445ccf in __libc_start_call_main /usr/src/debug/glibc/glibc/csu/../sysdeps/nptl/libc_start_call_main.h:58:16
```

Part of tarantool#8423

NO_DOC=internal
NO_CHANGELOG=internal
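The shutdown ordering the message above argues for (stop each thread's loop first, then free the memory it uses) can be sketched with plain pthreads. The worker struct, worker_loop, and the stop flag are illustrative stand-ins, not the cbus API:

```c
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdlib.h>

struct worker {
	pthread_t thread;
	atomic_bool stop;
	int *resource; /* memory the loop touches on every iteration */
};

static void *
worker_loop(void *arg)
{
	struct worker *w = arg;

	while (!atomic_load(&w->stop)) {
		(*w->resource)++; /* pretend work using the resource */
		sched_yield();
	}
	return NULL;
}

int main(void)
{
	struct worker w;

	atomic_init(&w.stop, false);
	w.resource = calloc(1, sizeof(*w.resource));
	pthread_create(&w.thread, NULL, worker_loop, &w);

	atomic_store(&w.stop, true);  /* 1. ask the loop to exit */
	pthread_join(w.thread, NULL); /* 2. wait until it actually has */
	free(w.resource);             /* 3. only now free its memory */
	return 0;
}
```

Freeing the resource before the join is the analogue of applier_free running before wal_free: the still-running loop would touch freed memory.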
nshy added a commit to nshy/tarantool that referenced this issue Dec 8, 2023
Here is the issue with replication shutdown: it crashes if bootstrap is
in progress. Bootstrap uses the resume_to_state API to wait for the
applier state of interest. resume_to_state normally pauses the applier
fiber on applier stop/off and wakes the bootstrap fiber to pass it the
error. But if the applier fiber is cancelled, as happens on shutdown,
then applier_pause returns immediately, which leads to the assertion
later.

Even if we somehow ignore this assertion, we hit an assertion in the
bootstrap fiber in applier_wait_for_state, as it expects the diag to be
set in the off/stop state. But the diag gets eaten by the applier fiber
join in the shutdown fiber.

On an applier error, if there is a fiber using the resume_to_state API,
we first suspend the applier fiber and only exit it when the applier
fiber gets canceled on applier stop. AFAIU, this complicated mechanism
of keeping the fiber alive on errors exists only to preserve the fiber
diag for the bootstrap fiber. Instead, let's move the diag for
bootstrap out of the fiber; then we don't need to keep the fiber alive
on errors.

The current approach to passing the diag has other oddities: we won't
finish a disconnect on errors immediately, and applier_f has to return
-1 or 0 on errors depending on whether we expect another fiber to steal
the fiber diag or not.

Part of tarantool#8423

NO_TEST=rely on existing tests
NO_CHANGELOG=internal
NO_DOC=internal

Shutdown fiber stack:
```
  #5  0x00007f84f0c54d26 in __assert_fail (
      assertion=0x564ffd5dfdec "fiber() == applier->fiber",
      file=0x564ffd5dedae "./src/box/applier.cc", line=2809,
      function=0x564ffd5dfdcf "void applier_pause(applier*)") at assert.c:101
  #6  0x0000564ffd0a0780 in applier_pause (applier=0x564fff30b1e0)
      at /<snap>/dev/tarantool/src/box/applier.cc:2809
  #7  0x0000564ffd0a08ab in applier_on_state_f (trigger=0x7f84f0780a60,
      event=0x564fff30b1e0) at /<snap>/dev/tarantool/src/box/applier.cc:2845
  #8  0x0000564ffd20a873 in trigger_run_list (list=0x7f84f0680de0,
      event=0x564fff30b1e0) at /<snap>/tarantool/src/lib/core/trigger.cc:100
  #9  0x0000564ffd20a991 in trigger_run (list=0x564fff30b818, event=0x564fff30b1e0)
      at /<snap>/tarantool/src/lib/core/trigger.cc:133
  #10 0x0000564ffd0945cb in trigger_run_xc (list=0x564fff30b818, event=0x564fff30b1e0)
      at /<snap>/tarantool/src/lib/core/trigger.h:173
  #11 0x0000564ffd09689a in applier_set_state (applier=0x564fff30b1e0,
      state=APPLIER_OFF) at /<snap>/tarantool/src/box/applier.cc:83
  #12 0x0000564ffd0a0313 in applier_stop (applier=0x564fff30b1e0)
      at /<snap>/tarantool/src/box/applier.cc:2749
  #13 0x0000564ffd08b8c7 in replication_shutdown ()
```

Bootstrap fiber stack:

```
  #1  0x0000564ffd1e0c95 in fiber_yield_impl (will_switch_back=true)
      at /<snap>/tarantool/src/lib/core/fiber.c:863
  #2  0x0000564ffd1e0caa in fiber_yield ()
      at /<snap>/tarantool/src/lib/core/fiber.c:870
  #3  0x0000564ffd1e0f0b in fiber_yield_timeout (delay=3153600000)
      at /<snap>/tarantool/src/lib/core/fiber.c:914
  #4  0x0000564ffd1ed401 in fiber_cond_wait_timeout
      (c=0x7f84f0780aa8, timeout=3153600000)
      at /<snap>/tarantool/src/lib/core/fiber_cond.c:107
  #5  0x0000564ffd1ed61c in fiber_cond_wait_deadline
      (c=0x7f84f0780aa8, deadline=3155292512.1470647)
      at /<snap>/tarantool/src/lib/core/fiber_cond.c:128
  #6  0x0000564ffd0a0a00 in applier_wait_for_state(applier_on_state*, double)
      (trigger=0x7f84f0780a60, timeout=3153600000)
      at /<snap>/tarantool/src/box/applier.cc:2877
  #7  0x0000564ffd0a0bbb in applier_resume_to_state(applier*, applier_state, double)
      (applier=0x564fff30b1e0, state=APPLIER_JOINED, timeout=3153600000)
      at /<snap>/tarantool/src/box/applier.cc:2898
  #8  0x0000564ffd074bf4 in bootstrap_from_master(replica*) (master=0x564fff2b30d0)
      at /<snap>/tarantool/src/box/box.cc:4980
  #9  0x0000564ffd074eed in bootstrap(bool*) (is_bootstrap_leader=0x7f84f0780b81)
      at /<snap>/tarantool/src/box/box.cc:5081
  #10 0x0000564ffd07613d in box_cfg_xc() ()
      at /<snap>/tarantool/src/box/box.cc:5427
```
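The change the commit message above proposes, moving the bootstrap diag out of the applier fiber, can be illustrated with a minimal sketch. All names here (applier_sketch, applier_fail, bootstrap_check) are hypothetical, not the real applier API: the worker records its error in a slot owned by the shared object, so the waiter never depends on the fiber staying alive or on join stealing the diag.

```c
#include <stdio.h>

struct applier_sketch {
	char bootstrap_err[128]; /* error slot owned by the object, not the fiber */
	int is_off;
};

/* Worker side: record the failure in the object before the fiber dies. */
static void
applier_fail(struct applier_sketch *a, const char *msg)
{
	snprintf(a->bootstrap_err, sizeof(a->bootstrap_err), "%s", msg);
	a->is_off = 1;
}

/* Waiter (bootstrap) side: read the error from the object instead of
 * stealing the dead fiber's diag via join. */
static void
bootstrap_check(const struct applier_sketch *a)
{
	if (a->is_off && a->bootstrap_err[0] != '\0')
		fprintf(stderr, "bootstrap failed: %s\n", a->bootstrap_err);
}

int main(void)
{
	struct applier_sketch a = {0};

	applier_fail(&a, "SocketError: connection reset");
	bootstrap_check(&a);
	return 0;
}
```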
nshy added a commit to nshy/tarantool that referenced this issue Dec 8, 2023
nshy added a commit to nshy/tarantool that referenced this issue Dec 12, 2023
We may need to cancel a fiber that waits for a cord to finish. For this
purpose, let's cancel the fiber started by cord_costart inside the cord.

Note that there is a race between stopping cancel_event in the cord and
triggering it via ev_async_send in the joining thread. AFAIU it is safe.

We also need to fix how the wal cord is stopped, to address the
stack-use-after-return issue shown below. It arises because we did not
stop the async watcher that resides in the wal endpoint, while the
endpoint itself resides on the stack. Later, when we stop the newly
introduced cancel_event, we access the never-stopped async watcher,
which by that moment has gone out of scope.

The fix seems simple: we only need to destroy the wal endpoint. But then
we hit an assertion on deleting the endpoint, because the endpoints in
the other cords are not destroyed, and when we delete the wal endpoint
we access rlist links that reside in the freed memory of those
endpoints. So to clean up cleanly we would need to stop the vinyl loop
properly, which means reverting commit e463128 ("vinyl: cancel reader
and writer threads on shutdown"). This issue needs more attention, so
let's postpone it by temporarily suppressing the ASAN issue.

```
==3224698==ERROR: AddressSanitizer: stack-use-after-return on address 0x7f654b3b0170 at pc 0x555a2817c282 bp 0x7f654ca55b30 sp 0x7f654ca55b28
	WRITE of size 4 at 0x7f654b3b0170 thread T3
	#0 0x555a2817c281 in ev_async_stop /home/shiny/dev/tarantool/third_party/libev/ev.c:5492:37
	#1 0x555a27827738 in cord_thread_func /home/shiny/dev/tarantool/src/lib/core/fiber.c:1990:2
	#2 0x7f65574aa9ea in start_thread /usr/src/debug/glibc/glibc/nptl/pthread_create.c:444:8
	#3 0x7f655752e7cb in clone3 /usr/src/debug/glibc/glibc/misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
```

Part of tarantool#8423

NO_DOC=internal
NO_CHANGELOG=internal
nshy added a commit to nshy/tarantool that referenced this issue Dec 12, 2023
nshy added a commit to nshy/tarantool that referenced this issue Dec 14, 2023
locker pushed a commit that referenced this issue Dec 15, 2023
We may need to cancel a fiber that waits for a cord to finish. For this
purpose, let's cancel the fiber started by cord_costart inside the cord.

Note that there is a race between stopping cancel_event in the cord and
triggering it via ev_async_send in the joining thread. AFAIU it is safe.

We also need to fix how the wal cord is stopped, to address the
stack-use-after-return issue shown below. It arises because we did not
stop the async watcher that resides in the wal endpoint, while the
endpoint itself resides on the stack. Later, when we stop the newly
introduced cancel_event, we access the never-stopped async watcher,
which by that moment has gone out of scope.

The fix seems simple: we only need to destroy the wal endpoint. But then
we hit an assertion on deleting the endpoint, because the endpoints in
the other cords are not destroyed, and when we delete the wal endpoint
we access rlist links that reside in the freed memory of those
endpoints. So to clean up cleanly we would need to stop the vinyl loop
properly, which means reverting commit e463128 ("vinyl: cancel reader
and writer threads on shutdown"). This issue needs more attention, so
let's postpone it by temporarily suppressing the ASAN issue.

```
==3224698==ERROR: AddressSanitizer: stack-use-after-return on address 0x7f654b3b0170 at pc 0x555a2817c282 bp 0x7f654ca55b30 sp 0x7f654ca55b28
	WRITE of size 4 at 0x7f654b3b0170 thread T3
	#0 0x555a2817c281 in ev_async_stop /home/shiny/dev/tarantool/third_party/libev/ev.c:5492:37
	#1 0x555a27827738 in cord_thread_func /home/shiny/dev/tarantool/src/lib/core/fiber.c:1990:2
	#2 0x7f65574aa9ea in start_thread /usr/src/debug/glibc/glibc/nptl/pthread_create.c:444:8
	#3 0x7f655752e7cb in clone3 /usr/src/debug/glibc/glibc/misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
```

Part of #8423

NO_DOC=internal
NO_CHANGELOG=internal
nshy added a commit to nshy/tarantool that referenced this issue Dec 15, 2023
Actually, the newly introduced 'replication/shutdown_test.lua' has an
issue (all three subtests): the master shuts down in time, but the
replica crashes. Below are the replica stacks during the crash. The
story is as follows. The master is shut down, so the replica gets a
`SocketError` and sleeps on the reconnect timeout. Now the replica is
shut down. The `applier_f` loop exits immediately without waking the
bootstrap fiber. Next, we change the state to OFF in the shutdown fiber
in `applier_stop`, triggering the `resume_to_state` trigger. It calls
`applier_pause`, which expects the current fiber to be the applier
fiber.

So we'd better wake up the bootstrap fiber in this case. But a simple
change to check the cancel state after the reconnection sleep does not
work yet: this time `applier_pause` exits immediately, which is not the
way `resume_to_state` should work. If we return 0 from the applier
fiber, we clear the fiber diag on fiber death, which is not expected by
`resume_to_state`. If we return -1, then `resume_to_state` will steal
the fiber diag, which is not expected by `fiber_join` in `applier_stop`.

TODO (check the -1 case in an experiment!)

Even if we somehow ignore this assertion, we hit an assertion in the
bootstrap fiber in applier_wait_for_state, as it expects the diag to be
set in the off/stop state. But the diag gets eaten by the applier fiber
join in the shutdown fiber.

On an applier error, if there is a fiber using the resume_to_state API,
we first suspend the applier fiber and only exit it when the applier
fiber gets canceled on applier stop. AFAIU, this complicated mechanism
of keeping the fiber alive on errors exists only to preserve the fiber
diag for the bootstrap fiber. Instead, let's move the diag for
bootstrap out of the fiber; then we don't need to keep the fiber alive
on errors.

The current approach to passing the diag has other oddities: we won't
finish a disconnect on errors immediately, and applier_f has to return
-1 or 0 on errors depending on whether we expect another fiber to steal
the fiber diag or not.

Part of tarantool#8423

NO_TEST=rely on existing tests
NO_CHANGELOG=internal
NO_DOC=internal

Shutdown fiber stack (crash):
```
  #5  0x00007f84f0c54d26 in __assert_fail (
      assertion=0x564ffd5dfdec "fiber() == applier->fiber",
      file=0x564ffd5dedae "./src/box/applier.cc", line=2809,
      function=0x564ffd5dfdcf "void applier_pause(applier*)") at assert.c:101
  #6  0x0000564ffd0a0780 in applier_pause (applier=0x564fff30b1e0)
      at /<snap>/dev/tarantool/src/box/applier.cc:2809
  #7  0x0000564ffd0a08ab in applier_on_state_f (trigger=0x7f84f0780a60,
      event=0x564fff30b1e0) at /<snap>/dev/tarantool/src/box/applier.cc:2845
  #8  0x0000564ffd20a873 in trigger_run_list (list=0x7f84f0680de0,
      event=0x564fff30b1e0) at /<snap>/tarantool/src/lib/core/trigger.cc:100
  #9  0x0000564ffd20a991 in trigger_run (list=0x564fff30b818, event=0x564fff30b1e0)
      at /<snap>/tarantool/src/lib/core/trigger.cc:133
  #10 0x0000564ffd0945cb in trigger_run_xc (list=0x564fff30b818, event=0x564fff30b1e0)
      at /<snap>/tarantool/src/lib/core/trigger.h:173
  #11 0x0000564ffd09689a in applier_set_state (applier=0x564fff30b1e0,
      state=APPLIER_OFF) at /<snap>/tarantool/src/box/applier.cc:83
  #12 0x0000564ffd0a0313 in applier_stop (applier=0x564fff30b1e0)
      at /<snap>/tarantool/src/box/applier.cc:2749
  #13 0x0000564ffd08b8c7 in replication_shutdown ()
```

Bootstrap fiber stack:

```
  #1  0x0000564ffd1e0c95 in fiber_yield_impl (will_switch_back=true)
      at /<snap>/tarantool/src/lib/core/fiber.c:863
  #2  0x0000564ffd1e0caa in fiber_yield ()
      at /<snap>/tarantool/src/lib/core/fiber.c:870
  #3  0x0000564ffd1e0f0b in fiber_yield_timeout (delay=3153600000)
      at /<snap>/tarantool/src/lib/core/fiber.c:914
  #4  0x0000564ffd1ed401 in fiber_cond_wait_timeout
      (c=0x7f84f0780aa8, timeout=3153600000)
      at /<snap>/tarantool/src/lib/core/fiber_cond.c:107
  #5  0x0000564ffd1ed61c in fiber_cond_wait_deadline
      (c=0x7f84f0780aa8, deadline=3155292512.1470647)
      at /<snap>/tarantool/src/lib/core/fiber_cond.c:128
  #6  0x0000564ffd0a0a00 in applier_wait_for_state(applier_on_state*, double)
      (trigger=0x7f84f0780a60, timeout=3153600000)
      at /<snap>/tarantool/src/box/applier.cc:2877
  #7  0x0000564ffd0a0bbb in applier_resume_to_state(applier*, applier_state, double)
      (applier=0x564fff30b1e0, state=APPLIER_JOINED, timeout=3153600000)
      at /<snap>/tarantool/src/box/applier.cc:2898
  #8  0x0000564ffd074bf4 in bootstrap_from_master(replica*) (master=0x564fff2b30d0)
      at /<snap>/tarantool/src/box/box.cc:4980
  #9  0x0000564ffd074eed in bootstrap(bool*) (is_bootstrap_leader=0x7f84f0780b81)
      at /<snap>/tarantool/src/box/box.cc:5081
  #10 0x0000564ffd07613d in box_cfg_xc() ()
      at /<snap>/tarantool/src/box/box.cc:5427
```
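The 0-vs-(-1) dilemma described above is, at bottom, a question of who owns the stored error when a task dies. A hedged sketch of that convention in generic C (task, task_exit, and task_join are invented names, not the real fiber API):

```c
#include <stdio.h>

struct task {
	int rc;          /* 0 or -1, like applier_f's return value */
	const char *err; /* stands in for the fiber diag */
};

/* Task side: rc == 0 clears the error at death; rc == -1 leaves it
 * behind for whoever joins or resumes. */
static void
task_exit(struct task *t, int rc, const char *err)
{
	t->rc = rc;
	t->err = (rc == 0) ? NULL : err;
}

/* Joiner side: may take ownership of the error only if rc != 0. */
static void
task_join(struct task *t)
{
	if (t->rc != 0 && t->err != NULL) {
		fprintf(stderr, "joiner consumed: %s\n", t->err);
		t->err = NULL;
	}
}

int main(void)
{
	struct task t;

	task_exit(&t, -1, "SocketError");
	task_join(&t); /* exactly one consumer; a second would find NULL */
	return 0;
}
```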
nshy added a commit to nshy/tarantool that referenced this issue Dec 18, 2023