[FreeBSD]: port files. #4

Merged
merged 1 commit into tarantool:master on Aug 3, 2012

Conversation

zloidemon
Contributor

Please add it for FreeBSD users.

kostja added a commit that referenced this pull request Aug 3, 2012
@kostja kostja merged commit ac0fedb into tarantool:master Aug 3, 2012
delamonpansie referenced this pull request in delamonpansie/octopus Feb 26, 2014
Typo fix and prettification
@avid avid mentioned this pull request Nov 1, 2014
zloidemon added a commit that referenced this pull request Mar 24, 2015
@mialinx mialinx mentioned this pull request Apr 3, 2015
@kostja kostja mentioned this pull request Apr 24, 2015
@a0s a0s mentioned this pull request Oct 6, 2015
@YadrovSergey YadrovSergey mentioned this pull request Feb 8, 2016
@void234 void234 mentioned this pull request Dec 28, 2020
tsafin added a commit that referenced this pull request Dec 28, 2020
More correct handling of expressions inside of select
drakonhg pushed a commit that referenced this pull request Sep 2, 2021
locker pushed a commit that referenced this pull request Aug 24, 2023
part_count was checked in index_def_check(), which was called too late.
Before that check:
1. `malloc(sizeof(*part_def) * part_count)` can fail for huge part_count;
2. key_def_new() can crash for zero part_count because of an out-of-bounds
   access in:

NO_WRAP
   - #1 key_def_contains_sequential_parts (def=0x5555561a2ef0) at src/box/tuple_extract_key.cc:26
   - #2 key_def_set_extract_func (key_def=0x5555561a2ef0) at src/box/tuple_extract_key.cc:442
   - #3 key_def_set_func (def=0x5555561a2ef0) at src/box/key_def.c:162
   - #4 key_def_new (parts=0x7fffc4001350, part_count=0, for_func_index=false) at src/box/key_def.c:320
NO_WRAP

Closes #8688

NO_DOC=bugfix
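
A minimal illustration of the idea behind the fix (not the actual tarantool code; the struct layout and the MAX_PARTS limit below are invented for the example): validate part_count up front, before it is used for allocation or handed to code that assumes at least one part exists.

```
/* Hedged sketch: reject bad part_count values before malloc() and before
 * any code that reads part[0]. MAX_PARTS and key_part_def are made up
 * for this example. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

enum { MAX_PARTS = 255 }; /* hypothetical cap on index parts */

struct key_part_def {
	uint32_t fieldno;
};

static struct key_part_def *
part_defs_new(uint32_t part_count)
{
	if (part_count == 0 || part_count > MAX_PARTS) {
		fprintf(stderr, "invalid part_count: %u\n", part_count);
		return NULL;
	}
	/* Safe now: never a zero-length array, never a huge allocation. */
	return malloc(sizeof(struct key_part_def) * part_count);
}
```

With a check like this in front, both failure modes listed above (huge part_count and zero part_count) become reported errors instead of reaching malloc() or key_def_new().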
locker pushed a commit that referenced this pull request Aug 24, 2023

(cherry picked from commit ef9e332)
nshy added a commit to nshy/tarantool that referenced this pull request Dec 7, 2023
We may need to cancel the fiber that waits for a cord to finish. For this
purpose, let's cancel the fiber started by cord_costart from inside the cord.

Note that there is a race between stopping cancel_event in the cord and
triggering it via ev_async_send in the joining thread. AFAIU it is safe.

We also need to fix stopping the wal cord to address the stack-use-after-return
issue shown below. It arises because we did not stop the async that resides in
the wal endpoint, and the endpoint resides on the stack. Later, when we stop
the introduced cancel_event, we access the not-yet-stopped async, which by that
moment has gone out of scope.

```
==3224698==ERROR: AddressSanitizer: stack-use-after-return on address 0x7f654b3b0170 at pc 0x555a2817c282 bp 0x7f654ca55b30 sp 0x7f654ca55b28
WRITE of size 4 at 0x7f654b3b0170 thread T3
    #0 0x555a2817c281 in ev_async_stop /home/shiny/dev/tarantool/third_party/libev/ev.c:5492:37
    tarantool#1 0x555a27827738 in cord_thread_func /home/shiny/dev/tarantool/src/lib/core/fiber.c:1990:2
    tarantool#2 0x7f65574aa9ea in start_thread /usr/src/debug/glibc/glibc/nptl/pthread_create.c:444:8
    tarantool#3 0x7f655752e7cb in clone3 /usr/src/debug/glibc/glibc/misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
```

But after that we also need to temporarily comment out freeing the applier
threads. The issue is that applier_free goes first and then wal_free. The
former does not correctly free its resources: in particular, it does not
destroy the thread endpoint but frees its memory. As a result we get a
use-after-free on destroying the wal endpoint.

```
==3508646==ERROR: AddressSanitizer: heap-use-after-free on address 0x61b000001a30 at pc 0x5556ff1b08d8 bp 0x7f69cb7f65c0 sp 0x7f69cb7f65b8
WRITE of size 8 at 0x61b000001a30 thread T3
    #0 0x5556ff1b08d7 in rlist_del /home/shiny/dev/tarantool/src/lib/small/include/small/rlist.h:101:19
    tarantool#1 0x5556ff1b08d7 in cbus_endpoint_destroy /home/shiny/dev/tarantool/src/lib/core/cbus.c:256:2
    tarantool#2 0x5556feea1f2c in wal_writer_f /home/shiny/dev/tarantool/src/box/wal.c:1237:2
    tarantool#3 0x5556fea3eb57 in fiber_cxx_invoke(int (*)(__va_list_tag*), __va_list_tag*) /home/shiny/dev/tarantool/src/lib/core/fiber.h:1297:10
    tarantool#4 0x5556ff19af3e in fiber_loop /home/shiny/dev/tarantool/src/lib/core/fiber.c:1160:18
    tarantool#5 0x5556ffb0fbd2 in coro_init /home/shiny/dev/tarantool/third_party/coro/coro.c:108:3

0x61b000001a30 is located 1200 bytes inside of 1528-byte region [0x61b000001580,0x61b000001b78)
freed by thread T0 here:
    #0 0x5556fe9ed8a2 in __interceptor_free.part.0 asan_malloc_linux.cpp.o
    tarantool#1 0x5556fee4ef65 in applier_free /home/shiny/dev/tarantool/src/box/applier.cc:2175:3
    tarantool#2 0x5556fedfce01 in box_storage_free() /home/shiny/dev/tarantool/src/box/box.cc:5869:2
    tarantool#3 0x5556fedfce01 in box_free /home/shiny/dev/tarantool/src/box/box.cc:5936:2
    tarantool#4 0x5556fea3cfec in tarantool_free() /home/shiny/dev/tarantool/src/main.cc:575:2
    tarantool#5 0x5556fea3cfec in main /home/shiny/dev/tarantool/src/main.cc:1087:2
    tarantool#6 0x7f69d7445ccf in __libc_start_call_main /usr/src/debug/glibc/glibc/csu/../sysdeps/nptl/libc_start_call_main.h:58:16
```

Part of tarantool#8423

NO_DOC=internal
NO_CHANGELOG=internal
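
For readers unfamiliar with the libev side of this, here is a small, self-contained sketch of the watcher-lifetime rule the fix relies on (plain libev, not tarantool's cord code; all names are invented): an ev_async must be stopped while the memory it lives in is still valid, i.e. before the owning frame or structure goes away.

```
/* Hedged sketch, not tarantool code: keep the async watcher in a context
 * that outlives the loop, and stop it before tearing the loop down. If
 * the watcher lived on a stack frame that had already returned,
 * ev_async_stop() would be the stack-use-after-return seen above. */
#include <ev.h>
#include <stddef.h>

struct worker {
	struct ev_loop *loop;
	ev_async cancel_event; /* lives in the worker struct, not on a stack */
};

static void
cancel_cb(EV_P_ ev_async *w, int revents)
{
	(void)w; (void)revents;
	ev_break(EV_A_ EVBREAK_ALL); /* stand-in for cancelling the fiber */
}

static void *
worker_f(void *arg) /* thread entry */
{
	struct worker *w = arg;
	w->loop = ev_loop_new(0);
	ev_async_init(&w->cancel_event, cancel_cb);
	ev_async_start(w->loop, &w->cancel_event);
	ev_run(w->loop, 0);
	/* Stop the watcher while both it and the loop are still alive. */
	ev_async_stop(w->loop, &w->cancel_event);
	ev_loop_destroy(w->loop);
	return NULL;
}
```

The joining thread requests cancellation with ev_async_send(w->loop, &w->cancel_event); ev_async_send is thread-safe, which is what makes the cross-thread wakeup (and the benign race mentioned above) possible.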
nshy added a commit to nshy/tarantool that referenced this pull request Dec 7, 2023
nshy added a commit to nshy/tarantool that referenced this pull request Dec 7, 2023
We may need to cancel the fiber that waits for a cord to finish. For this
purpose, let's cancel the fiber started by cord_costart from inside the cord.

Note that there is a race between stopping cancel_event in the cord and
triggering it via ev_async_send in the joining thread. AFAIU it is safe.

We also need to fix stopping the wal cord to address the stack-use-after-return
issue shown below. It arises because we did not stop the async that resides in
the wal endpoint, and the endpoint resides on the stack. Later, when we stop
the introduced cancel_event, we access the not-yet-stopped async, which by that
moment has gone out of scope.

```
==3224698==ERROR: AddressSanitizer: stack-use-after-return on address 0x7f654b3b0170 at pc 0x555a2817c282 bp 0x7f654ca55b30 sp 0x7f654ca55b28
WRITE of size 4 at 0x7f654b3b0170 thread T3
    #0 0x555a2817c281 in ev_async_stop /home/shiny/dev/tarantool/third_party/libev/ev.c:5492:37
    tarantool#1 0x555a27827738 in cord_thread_func /home/shiny/dev/tarantool/src/lib/core/fiber.c:1990:2
    tarantool#2 0x7f65574aa9ea in start_thread /usr/src/debug/glibc/glibc/nptl/pthread_create.c:444:8
    tarantool#3 0x7f655752e7cb in clone3 /usr/src/debug/glibc/glibc/misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
```

But after that we also need to properly destroy the endpoints in other threads.
See, for example, the ASAN report for the applier thread below. The issue is
that the applier endpoint is linked with the wal endpoint, and the applier
endpoint's memory is freed (without being properly destroyed) when we destroy
the wal endpoint.

A similar issue exists with the endpoints in the vinyl threads, but with them
we get SIGSEGV instead of a proper ASAN report; the likely cause is that the
vinyl endpoints reside on the stack. In the case of the applier we can, for the
sake of this patch, temporarily comment out freeing the applier thread memory
until proper applier shutdown is implemented, but we cannot do the same for the
vinyl threads. Let's just stop cbus_loop in both cases. This is not a full
shutdown solution for either the applier or vinyl, as both may still have
running fibers in their threads; it is a temporary solution just for this
patch. We add the missing pieces in later patches.

```
==3508646==ERROR: AddressSanitizer: heap-use-after-free on address 0x61b000001a30 at pc 0x5556ff1b08d8 bp 0x7f69cb7f65c0 sp 0x7f69cb7f65b8
WRITE of size 8 at 0x61b000001a30 thread T3
    #0 0x5556ff1b08d7 in rlist_del /home/shiny/dev/tarantool/src/lib/small/include/small/rlist.h:101:19
    tarantool#1 0x5556ff1b08d7 in cbus_endpoint_destroy /home/shiny/dev/tarantool/src/lib/core/cbus.c:256:2
    tarantool#2 0x5556feea1f2c in wal_writer_f /home/shiny/dev/tarantool/src/box/wal.c:1237:2
    tarantool#3 0x5556fea3eb57 in fiber_cxx_invoke(int (*)(__va_list_tag*), __va_list_tag*) /home/shiny/dev/tarantool/src/lib/core/fiber.h:1297:10
    tarantool#4 0x5556ff19af3e in fiber_loop /home/shiny/dev/tarantool/src/lib/core/fiber.c:1160:18
    tarantool#5 0x5556ffb0fbd2 in coro_init /home/shiny/dev/tarantool/third_party/coro/coro.c:108:3

0x61b000001a30 is located 1200 bytes inside of 1528-byte region [0x61b000001580,0x61b000001b78)
freed by thread T0 here:
    #0 0x5556fe9ed8a2 in __interceptor_free.part.0 asan_malloc_linux.cpp.o
    tarantool#1 0x5556fee4ef65 in applier_free /home/shiny/dev/tarantool/src/box/applier.cc:2175:3
    tarantool#2 0x5556fedfce01 in box_storage_free() /home/shiny/dev/tarantool/src/box/box.cc:5869:2
    tarantool#3 0x5556fedfce01 in box_free /home/shiny/dev/tarantool/src/box/box.cc:5936:2
    tarantool#4 0x5556fea3cfec in tarantool_free() /home/shiny/dev/tarantool/src/main.cc:575:2
    tarantool#5 0x5556fea3cfec in main /home/shiny/dev/tarantool/src/main.cc:1087:2
    tarantool#6 0x7f69d7445ccf in __libc_start_call_main /usr/src/debug/glibc/glibc/csu/../sysdeps/nptl/libc_start_call_main.h:58:16
```

Part of tarantool#8423

NO_DOC=internal
NO_CHANGELOG=internal
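
The rlist_del frame in the report is the key detail: cbus endpoints sit in an intrusive doubly linked list, so freeing an endpoint's memory while it is still linked corrupts its neighbours when they are later unlinked. A tiny generic illustration of that failure mode (not tarantool's rlist or cbus code; names are invented):

```
/* Hedged sketch of the use-after-free pattern: if a node is free()d
 * while still linked, unlinking its neighbour later writes into freed
 * memory, which is the kind of access ASAN flags in rlist_del(). */
#include <stdlib.h>

struct node {
	struct node *prev, *next;
};

static void
list_create(struct node *head)
{
	head->prev = head->next = head;
}

static void
list_insert_after(struct node *pos, struct node *n)
{
	n->prev = pos;
	n->next = pos->next;
	pos->next->prev = n;
	pos->next = n;
}

static void
list_remove(struct node *n)
{
	/* Touches both neighbours; if either was freed while still linked,
	 * this is a heap-use-after-free. */
	n->prev->next = n->next;
	n->next->prev = n->prev;
}
```

This is why the endpoint has to be unlinked/destroyed before its memory is released; stopping cbus_loop first, as this commit does, is only the temporary step toward that.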
nshy added a commit to nshy/tarantool that referenced this pull request Dec 8, 2023
Here is the issue with replication shutdown: it crashes if bootstrap is in
progress. Bootstrap uses the resume_to_state API to wait for an applier state
of interest. resume_to_state usually pauses the applier fiber on applier
stop/off and wakes the bootstrap fiber to pass it the error. But if the
applier fiber is cancelled, as it is during shutdown, applier_pause returns
immediately, which leads to the assertion failure later.

Even if we somehow ignore this assertion, we then hit an assertion in the
bootstrap fiber in applier_wait_for_state, as it expects a diag to be set in
the off/stop state. But the diag gets eaten by the applier fiber join in the
shutdown fiber.

On an applier error, if there is a fiber using the resume_to_state API, we
first suspend the applier fiber and only exit it when the applier fiber gets
canceled on applier stop. AFAIU, this complicated mechanism of keeping the
fiber alive on errors exists only to preserve the fiber diag for the bootstrap
fiber. Instead, let's move the diag for bootstrap out of the fiber. Then we no
longer need to keep the fiber alive on errors.

The current approach to passing the diag has other oddities: we don't finish
the disconnect immediately on errors, and applier_f has to return -1 or 0 on
errors depending on whether we expect another fiber to steal the fiber diag
or not.

Part of tarantool#8423

NO_TEST=rely on existing tests
NO_CHANGELOG=internal
NO_DOC=internal

Shutdown fiber stack:
```
  tarantool#5  0x00007f84f0c54d26 in __assert_fail (
      assertion=0x564ffd5dfdec "fiber() == applier->fiber",
      file=0x564ffd5dedae "./src/box/applier.cc", line=2809,
      function=0x564ffd5dfdcf "void applier_pause(applier*)") at assert.c:101
  tarantool#6  0x0000564ffd0a0780 in applier_pause (applier=0x564fff30b1e0)
      at /<snap>/dev/tarantool/src/box/applier.cc:2809
  tarantool#7  0x0000564ffd0a08ab in applier_on_state_f (trigger=0x7f84f0780a60,
      event=0x564fff30b1e0) at /<snap>/dev/tarantool/src/box/applier.cc:2845
  tarantool#8  0x0000564ffd20a873 in trigger_run_list (list=0x7f84f0680de0,
      event=0x564fff30b1e0) at /<snap>/tarantool/src/lib/core/trigger.cc:100
  tarantool#9  0x0000564ffd20a991 in trigger_run (list=0x564fff30b818, event=0x564fff30b1e0)
      at /<snap>/tarantool/src/lib/core/trigger.cc:133
  tarantool#10 0x0000564ffd0945cb in trigger_run_xc (list=0x564fff30b818, event=0x564fff30b1e0)
      at /<snap>/tarantool/src/lib/core/trigger.h:173
  tarantool#11 0x0000564ffd09689a in applier_set_state (applier=0x564fff30b1e0,
      state=APPLIER_OFF) at /<snap>/tarantool/src/box/applier.cc:83
  tarantool#12 0x0000564ffd0a0313 in applier_stop (applier=0x564fff30b1e0)
      at /<snap>/tarantool/src/box/applier.cc:2749
  tarantool#13 0x0000564ffd08b8c7 in replication_shutdown ()
```

Bootstrap fiber stack:

```
  tarantool#1  0x0000564ffd1e0c95 in fiber_yield_impl (will_switch_back=true)
      at /<snap>/tarantool/src/lib/core/fiber.c:863
  tarantool#2  0x0000564ffd1e0caa in fiber_yield ()
      at /<snap>/tarantool/src/lib/core/fiber.c:870
  tarantool#3  0x0000564ffd1e0f0b in fiber_yield_timeout (delay=3153600000)
      at /<snap>/tarantool/src/lib/core/fiber.c:914
  tarantool#4  0x0000564ffd1ed401 in fiber_cond_wait_timeout
      (c=0x7f84f0780aa8, timeout=3153600000)
      at /<snap>/tarantool/src/lib/core/fiber_cond.c:107
  tarantool#5  0x0000564ffd1ed61c in fiber_cond_wait_deadline
      (c=0x7f84f0780aa8, deadline=3155292512.1470647)
      at /<snap>/tarantool/src/lib/core/fiber_cond.c:128
  tarantool#6  0x0000564ffd0a0a00 in applier_wait_for_state(applier_on_state*, double)
      (trigger=0x7f84f0780a60, timeout=3153600000)
      at /<snap>/tarantool/src/box/applier.cc:2877
  tarantool#7  0x0000564ffd0a0bbb in applier_resume_to_state(applier*, applier_state, double)
      (applier=0x564fff30b1e0, state=APPLIER_JOINED, timeout=3153600000)
      at /<snap>/tarantool/src/box/applier.cc:2898
  tarantool#8  0x0000564ffd074bf4 in bootstrap_from_master(replica*) (master=0x564fff2b30d0)
      at /<snap>/tarantool/src/box/box.cc:4980
  tarantool#9  0x0000564ffd074eed in bootstrap(bool*) (is_bootstrap_leader=0x7f84f0780b81)
      at /<snap>/tarantool/src/box/box.cc:5081
  tarantool#10 0x0000564ffd07613d in box_cfg_xc() ()
      at /<snap>/tarantool/src/box/box.cc:5427
```
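A generic sketch of the ownership change described above (invented names, not the applier code): instead of keeping the failing worker fiber alive so a waiter can later steal its diag, the error is published into storage the waiter owns, and the worker is free to exit.

```
/* Hedged illustration: the waiter owns the error slot, so the worker can
 * report failure and exit without anyone "stealing" its diag. */
#include <stdio.h>

struct waiter {
	char err[256]; /* error published for the bootstrap-like waiter */
	int state;     /* e.g. 0 = OFF */
};

static void
worker_fail(struct waiter *w, const char *msg)
{
	snprintf(w->err, sizeof(w->err), "%s", msg);
	w->state = 0; /* OFF: the waiter wakes up, reads w->err, and acts */
}

static void
waiter_check(const struct waiter *w)
{
	if (w->state == 0 && w->err[0] != '\0')
		fprintf(stderr, "bootstrap failed: %s\n", w->err);
}
```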
nshy added a commit to nshy/tarantool that referenced this pull request Dec 8, 2023
nshy added a commit to nshy/tarantool that referenced this pull request Dec 15, 2023
Actually, the newly introduced 'replication/shutdown_test.lua' has an issue
(in all three subtests). The master is shut down in time, but the replica
crashes. Below are the replica stacks during the crash. The story is as
follows. The master is shut down, so the replica gets a `SocketError` and
sleeps on the reconnect timeout. Now the replica is shut down. The `applier_f`
loop exits immediately without waking the bootstrap fiber. Next, we change the
state to OFF in the shutdown fiber in `applier_stop`, triggering the
`resume_to_state` trigger, which calls `applier_pause`, and that expects the
current fiber to be the applier fiber.

So we'd better wake up the bootstrap fiber in this case. But the simple change
of checking the cancel state after the reconnection sleep does not work yet:
this time `applier_pause` exits immediately, which is not the way
`resume_to_state` is supposed to work. See, if we return 0 from the applier
fiber, we clear the fiber diag on fiber death, which is not expected by
`resume_to_state`. If we return -1, then `resume_to_state` will steal the
fiber diag, which is not expected by `fiber_join` in `applier_stop`.

TODO (check -1 case in experiment!)

Even if we somehow ignore this assertion, we then hit an assertion in the
bootstrap fiber in applier_wait_for_state, as it expects a diag to be set in
the off/stop state. But the diag gets eaten by the applier fiber join in the
shutdown fiber.

On an applier error, if there is a fiber using the resume_to_state API, we
first suspend the applier fiber and only exit it when the applier fiber gets
canceled on applier stop. AFAIU, this complicated mechanism of keeping the
fiber alive on errors exists only to preserve the fiber diag for the bootstrap
fiber. Instead, let's move the diag for bootstrap out of the fiber. Then we no
longer need to keep the fiber alive on errors.

The current approach to passing the diag has other oddities: we don't finish
the disconnect immediately on errors, and applier_f has to return -1 or 0 on
errors depending on whether we expect another fiber to steal the fiber diag
or not.

Part of tarantool#8423

NO_TEST=rely on existing tests
NO_CHANGELOG=internal
NO_DOC=internal

Shutdown fiber stack (crash):
```
  tarantool#5  0x00007f84f0c54d26 in __assert_fail (
      assertion=0x564ffd5dfdec "fiber() == applier->fiber",
      file=0x564ffd5dedae "./src/box/applier.cc", line=2809,
      function=0x564ffd5dfdcf "void applier_pause(applier*)") at assert.c:101
  tarantool#6  0x0000564ffd0a0780 in applier_pause (applier=0x564fff30b1e0)
      at /<snap>/dev/tarantool/src/box/applier.cc:2809
  tarantool#7  0x0000564ffd0a08ab in applier_on_state_f (trigger=0x7f84f0780a60,
      event=0x564fff30b1e0) at /<snap>/dev/tarantool/src/box/applier.cc:2845
  tarantool#8  0x0000564ffd20a873 in trigger_run_list (list=0x7f84f0680de0,
      event=0x564fff30b1e0) at /<snap>/tarantool/src/lib/core/trigger.cc:100
  tarantool#9  0x0000564ffd20a991 in trigger_run (list=0x564fff30b818, event=0x564fff30b1e0)
      at /<snap>/tarantool/src/lib/core/trigger.cc:133
  tarantool#10 0x0000564ffd0945cb in trigger_run_xc (list=0x564fff30b818, event=0x564fff30b1e0)
      at /<snap>/tarantool/src/lib/core/trigger.h:173
  tarantool#11 0x0000564ffd09689a in applier_set_state (applier=0x564fff30b1e0,
      state=APPLIER_OFF) at /<snap>/tarantool/src/box/applier.cc:83
  tarantool#12 0x0000564ffd0a0313 in applier_stop (applier=0x564fff30b1e0)
      at /<snap>/tarantool/src/box/applier.cc:2749
  tarantool#13 0x0000564ffd08b8c7 in replication_shutdown ()
```

Bootstrap fiber stack:

```
  tarantool#1  0x0000564ffd1e0c95 in fiber_yield_impl (will_switch_back=true)
      at /<snap>/tarantool/src/lib/core/fiber.c:863
  tarantool#2  0x0000564ffd1e0caa in fiber_yield ()
      at /<snap>/tarantool/src/lib/core/fiber.c:870
  tarantool#3  0x0000564ffd1e0f0b in fiber_yield_timeout (delay=3153600000)
      at /<snap>/tarantool/src/lib/core/fiber.c:914
  tarantool#4  0x0000564ffd1ed401 in fiber_cond_wait_timeout
      (c=0x7f84f0780aa8, timeout=3153600000)
      at /<snap>/tarantool/src/lib/core/fiber_cond.c:107
  tarantool#5  0x0000564ffd1ed61c in fiber_cond_wait_deadline
      (c=0x7f84f0780aa8, deadline=3155292512.1470647)
      at /<snap>/tarantool/src/lib/core/fiber_cond.c:128
  tarantool#6  0x0000564ffd0a0a00 in applier_wait_for_state(applier_on_state*, double)
      (trigger=0x7f84f0780a60, timeout=3153600000)
      at /<snap>/tarantool/src/box/applier.cc:2877
  tarantool#7  0x0000564ffd0a0bbb in applier_resume_to_state(applier*, applier_state, double)
      (applier=0x564fff30b1e0, state=APPLIER_JOINED, timeout=3153600000)
      at /<snap>/tarantool/src/box/applier.cc:2898
  tarantool#8  0x0000564ffd074bf4 in bootstrap_from_master(replica*) (master=0x564fff2b30d0)
      at /<snap>/tarantool/src/box/box.cc:4980
  tarantool#9  0x0000564ffd074eed in bootstrap(bool*) (is_bootstrap_leader=0x7f84f0780b81)
      at /<snap>/tarantool/src/box/box.cc:5081
  tarantool#10 0x0000564ffd07613d in box_cfg_xc() ()
      at /<snap>/tarantool/src/box/box.cc:5427
```
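A minimal sketch of the "check cancellation after the reconnect sleep" idea in generic C (the flag, the sleep, and wake_waiter are stand-ins for fiber cancellation, the reconnect timeout, and waking the bootstrap fiber; this is not the applier patch itself):

```
/* Hedged sketch: after sleeping on the reconnect timeout, notice that we
 * were cancelled and notify the waiter instead of continuing, so the
 * waiter is never left hanging and no trigger runs in the wrong fiber. */
#include <stdatomic.h>
#include <unistd.h>

static atomic_bool cancelled; /* stand-in for fiber cancellation state */

static int
reconnect_loop(void (*wake_waiter)(void), unsigned reconnect_timeout_sec)
{
	for (;;) {
		/* ... connect attempt failed with SocketError ... */
		sleep(reconnect_timeout_sec);
		if (atomic_load(&cancelled)) {
			wake_waiter(); /* let the bootstrap-like waiter observe OFF */
			return 0;
		}
	}
}
```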
nshy added a commit to nshy/tarantool that referenced this pull request Dec 18, 2023
ligurio added a commit to ligurio/tarantool that referenced this pull request May 21, 2024
[001] tarantool#4  0x65481f151c11 in luaT_httpc_io_cleanup+33
[001] tarantool#5  0x65481f19ee63 in lj_BC_FUNCC+70
[001] tarantool#6  0x65481f1aa5d5 in gc_call_finalizer+133
[001] tarantool#7  0x65481f1ab1e3 in gc_onestep+211
[001] tarantool#8  0x65481f1aba68 in lj_gc_fullgc+120
[001] tarantool#9  0x65481f1a5fb5 in lua_gc+149
[001] tarantool#10 0x65481f1b57cf in lj_cf_collectgarbage+127
[001] tarantool#11 0x65481f19ee63 in lj_BC_FUNCC+70
[001] tarantool#12 0x65481f1a5c15 in lua_pcall+117
[001] tarantool#13 0x65481f14559f in luaT_call+15
[001] tarantool#14 0x65481f13c7e1 in lua_main+97
[001] tarantool#15 0x65481f13d000 in run_script_f+2032

NO_CHANGELOG=internal
NO_DOC=internal
NO_TEST=internal
ligurio added a commit to ligurio/tarantool that referenced this pull request May 21, 2024
locker added a commit to locker/tarantool that referenced this pull request Jun 10, 2024
`key_part::offset_slot_cache` and `key_part::format_epoch` are used for
speeding up tuple field lookup in `tuple_field_raw_by_part()`. These
structure members are accessed and updated without any locks, assuming
this code is executed exclusively in the tx thread. However, this isn't
necessarily true because we also perform tuple field lookups in vinyl
read threads. Apparently, this can result in unexpected races and bugs,
for example:

```
  tarantool#1  0x590be9f7eb6d in crash_collect+256
  tarantool#2  0x590be9f7f5a9 in crash_signal_cb+100
  tarantool#3  0x72b111642520 in __sigaction+80
  tarantool#4  0x590bea385e3c in load_u32+35
  tarantool#5  0x590bea231eba in field_map_get_offset+46
  tarantool#6  0x590bea23242a in tuple_field_raw_by_path+417
  tarantool#7  0x590bea23282b in tuple_field_raw_by_part+203
  tarantool#8  0x590bea23288c in tuple_field_by_part+91
  tarantool#9  0x590bea24cd2d in unsigned long tuple_hint<(field_type)5, false, false>(tuple*, key_def*)+103
  tarantool#10 0x590be9d4fba3 in tuple_hint+40
  tarantool#11 0x590be9d50acf in vy_stmt_hint+178
  tarantool#12 0x590be9d53531 in vy_page_stmt+168
  tarantool#13 0x590be9d535ea in vy_page_find_key+142
  tarantool#14 0x590be9d545e6 in vy_page_read_cb+210
  tarantool#15 0x590be9f94ef0 in cbus_call_perform+44
  tarantool#16 0x590be9f94eae in cmsg_deliver+52
  tarantool#17 0x590be9f9583e in cbus_process+100
  tarantool#18 0x590be9f958a5 in cbus_loop+28
  tarantool#19 0x590be9d512da in vy_run_reader_f+381
  tarantool#20 0x590be9cb4147 in fiber_cxx_invoke(int (*)(__va_list_tag*), __va_list_tag*)+34
  tarantool#21 0x590be9f8b697 in fiber_loop+219
  tarantool#22 0x590bea374bb6 in coro_init+120
```

Fix this by skipping this optimization for threads other than tx.

No test is added because reproducing this race is tricky. Ideally, bugs
like this one should be caught by fuzzing tests or thread sanitizers.

Closes tarantool#10123

NO_DOC=bug fix
NO_TEST=tested manually with fuzzer
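
A self-contained sketch of the fix's shape (generic C with invented names, not the key_part code): the unsynchronized cache is consulted and refreshed only on its owner thread, while every other thread takes the slow path, which removes the data race without adding locks.

```
/* Hedged illustration: gate a lock-free lookup cache to the thread that
 * owns it, mirroring "skip offset_slot_cache outside the tx thread". */
#include <pthread.h>
#include <stdbool.h>

struct slot_cache {
	pthread_t owner;  /* the only thread allowed to touch the cache */
	int cached_slot;  /* -1 means "not cached"; no synchronization */
};

static int
slow_lookup(int key)
{
	return key % 16; /* stand-in for the real field-map lookup */
}

static int
lookup(struct slot_cache *c, int key)
{
	bool is_owner = pthread_equal(pthread_self(), c->owner) != 0;
	if (is_owner && c->cached_slot >= 0)
		return c->cached_slot;      /* fast path: owner thread only */
	int slot = slow_lookup(key);
	if (is_owner)
		c->cached_slot = slot;      /* never written from other threads */
	return slot;
}
```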
locker added a commit to locker/tarantool that referenced this pull request Jun 11, 2024
locker added a commit that referenced this pull request Jun 13, 2024
locker added a commit that referenced this pull request Jun 13, 2024

(cherry picked from commit 19d1f1c)
locker added a commit that referenced this pull request Jun 13, 2024

(cherry picked from commit 19d1f1c)
nshy added a commit to nshy/tarantool that referenced this pull request Jul 5, 2024
The issue is that we increment `page_count` only on page write. If we fail
for some reason before that, the page info's `min_key` is leaked.

LSAN report for 'vinyl/recovery_quota.test.lua':

```
2024-07-05 13:30:34.605 [478603] main/103/on_shutdown vy_scheduler.c:1668 E> 512/0: failed to compact range (-inf..inf)

=================================================================
==478603==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 4 byte(s) in 1 object(s) allocated from:
    #0 0x5e4ebafcae09 in malloc (/home/shiny/dev/tarantool/build-asan-debug/src/tarantool+0x1244e09) (BuildId: 20c5933d67a3831c4f43f6860379d58d35b81974)
    tarantool#1 0x5e4ebb3f9b69 in vy_key_dup /home/shiny/dev/tarantool/src/box/vy_stmt.c:308:14
    tarantool#2 0x5e4ebb49b615 in vy_page_info_create /home/shiny/dev/tarantool/src/box/vy_run.c:257:23
    tarantool#3 0x5e4ebb48f59f in vy_run_writer_start_page /home/shiny/dev/tarantool/src/box/vy_run.c:2196:6
    tarantool#4 0x5e4ebb48c6b6 in vy_run_writer_append_stmt /home/shiny/dev/tarantool/src/box/vy_run.c:2287:6
    tarantool#5 0x5e4ebb72877f in vy_task_write_run /home/shiny/dev/tarantool/src/box/vy_scheduler.c:1132:8
    tarantool#6 0x5e4ebb73305e in vy_task_compaction_execute /home/shiny/dev/tarantool/src/box/vy_scheduler.c:1485:9
    tarantool#7 0x5e4ebb73e152 in vy_task_f /home/shiny/dev/tarantool/src/box/vy_scheduler.c:1795:6
    tarantool#8 0x5e4ebb01e0b1 in fiber_cxx_invoke(int (*)(__va_list_tag*), __va_list_tag*) /home/shiny/dev/tarantool/src/lib/core/fiber.h:1331:10
    tarantool#9 0x5e4ebc389ee0 in fiber_loop /home/shiny/dev/tarantool/src/lib/core/fiber.c:1182:18
    tarantool#10 0x5e4ebd3e9595 in coro_init /home/shiny/dev/tarantool/third_party/coro/coro.c:108:3

SUMMARY: AddressSanitizer: 4 byte(s) leaked in 1 allocation(s).
```

NO_TEST=covered by existing tests (ASAN build)
NO_DOC=bugfix
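
A compact illustration of the leak pattern described above (generic C with invented names, not the vinyl run writer): cleanup frees only `page_count` entries, but the counter is bumped after the write succeeds, so a failure between create and write leaks the freshly created entry's key.

```
/* Hedged sketch: if create_page() succeeded but the subsequent write
 * failed, pages[page_count] holds an allocated min_key that
 * cleanup_pages() never visits, matching the LSAN leak above. The fix is
 * to account for (or free) the page as soon as it is created. */
#include <stdlib.h>
#include <string.h>

struct page {
	char *min_key;
};

struct run {
	struct page pages[16];
	int page_count; /* incremented only after a successful write */
};

static int
create_page(struct run *run, const char *key)
{
	struct page *p = &run->pages[run->page_count];
	p->min_key = strdup(key);
	return p->min_key != NULL ? 0 : -1;
}

static void
cleanup_pages(struct run *run)
{
	for (int i = 0; i < run->page_count; i++)
		free(run->pages[i].min_key);
}
```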
nshy added a commit to nshy/tarantool that referenced this pull request Jul 8, 2024
nshy added a commit to nshy/tarantool that referenced this pull request Jul 8, 2024
nshy added a commit to nshy/tarantool that referenced this pull request Jul 9, 2024
nshy added a commit to nshy/tarantool that referenced this pull request Jul 12, 2024
nshy added a commit to nshy/tarantool that referenced this pull request Jul 15, 2024