AddressSanitizer: heap-use-after-free in GlusterFS clients #3945

Open
lvtao-sec opened this issue Jan 5, 2023 · 13 comments · May be fixed by #4490

lvtao-sec commented Jan 5, 2023

Description of problem:
I have hit this heap-use-after-free bug several times, but I can't reproduce it reliably because it requires an exact thread interleaving that I have not been able to determine.

The exact command to reproduce the issue:
The GlusterFS cluster is configured with 3 servers and 1 client, and the volume is created in this mode:

gluster volume create test-volume disperse 3 redundancy 1 $srvs force

This bug can sometimes be triggered by this PoC (a syzkaller program run against the mounted volume):

r0 = open$dir(&(0x7f0000000000)='./file0\x00', 0x40040, 0x0)
r1 = open(&(0x7f0000000040)='./file0\x00', 0x2300, 0x0)
fsetxattr$security_ima(r1, &(0x7f0000000080), 0x0, 0x0, 0x0)
r2 = open(&(0x7f00000000c0)='./file0/file0\x00', 0x100, 0x24)
write$binfmt_aout(r0, &(0x7f0000000640)={{0x108, 0xd8, 0x3, 0x350, 0x22f, 0x9, 0x245, 0x9}, "a30fc845338b1fc576d17087199eeb89296aefe77a34cf64359bf31dcbb5dab07ca85b4b01a39c76def457575040a300a6ae1b78df0ee3b72eeed79c924dfcc320631d34e006729738e0d07c2091e8b22ea5055afc5caaf4f9eb0a2472197c32634d499da949189cd13b4cea467ba55317de10e83608ee5f49821a17c67a7a67f4f87866562f6a92783556ab9cb424887d1a27", ['\x00', '\x00', '\x00', '\x00', '\x00']}, 0x5b3)

- Is there any crash? Provide the backtrace and coredump:
Yes, as below:

==380==WARNING: ASan doesn't fully support makecontext/swapcontext functions and may produce false positives in some cases!
=================================================================
==380==ERROR: AddressSanitizer: heap-use-after-free on address 0x6040000949a8 at pc 0x7ffff2f6a3f4 bp 0x7ffff0284060 sp 0x7ffff0284050
READ of size 4 at 0x6040000949a8 thread T6
    #0 0x7ffff2f6a3f3 in fuse_fd_inherit_directio /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:1564
    #1 0x7ffff2f6a3f3 in fuse_fd_cbk /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:1643
    #2 0x7fffeec42026 in io_stats_open_cbk /root/glusterfs/xlators/debug/io-stats/src/io-stats.c:2119
    #3 0x7ffff7492ae8 in default_open_cbk /root/glusterfs/libglusterfs/src/defaults.c:1216
    #4 0x7fffeeccbe0f in mdc_open_cbk /root/glusterfs/xlators/performance/md-cache/src/md-cache.c:2046
    #5 0x7ffff7492ae8 in default_open_cbk /root/glusterfs/libglusterfs/src/defaults.c:1216
    #6 0x7fffeed91a28 in gf_utime_open_cbk /root/glusterfs/xlators/features/utime/src/utime-autogen-fops.c:124
    #7 0x7ffff7492ae8 in default_open_cbk /root/glusterfs/libglusterfs/src/defaults.c:1216
    #8 0x7fffef064ea2 in ec_manager_open /root/glusterfs/xlators/cluster/ec/src/ec-inode-read.c:865
    #9 0x7fffef01fed0 in __ec_manager /root/glusterfs/xlators/cluster/ec/src/ec-common.c:3017
    #10 0x7fffef0203b9 in ec_resume /root/glusterfs/xlators/cluster/ec/src/ec-common.c:502
    #11 0x7fffef021c52 in ec_complete /root/glusterfs/xlators/cluster/ec/src/ec-common.c:579
    #12 0x7fffef0642f2 in ec_open_cbk /root/glusterfs/xlators/cluster/ec/src/ec-inode-read.c:741
    #13 0x7fffef2069b4 in client4_0_open_cbk /root/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:346
    #14 0x7ffff7225fca in rpc_clnt_handle_reply /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
    #15 0x7ffff7225fca in rpc_clnt_notify /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
    #16 0x7ffff721f983 in rpc_transport_notify /root/glusterfs/rpc/rpc-lib/src/rpc-transport.c:521
    #17 0x7ffff03465a6 in socket_event_poll_in_async /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
    #18 0x7ffff0356b39 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
    #19 0x7ffff0356b39 in socket_event_poll_in /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
    #20 0x7ffff0356b39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
    #21 0x7ffff0356b39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
    #22 0x7ffff74006c0 in event_dispatch_epoll_handler /root/glusterfs/libglusterfs/src/event-epoll.c:631
    #23 0x7ffff74006c0 in event_dispatch_epoll_worker /root/glusterfs/libglusterfs/src/event-epoll.c:742
    #24 0x7ffff71c5608 in start_thread /build/glibc-YYA7BZ/glibc-2.31/nptl/pthread_create.c:477
    #25 0x7ffff70ea102 in __clone (/lib/x86_64-linux-gnu/libc.so.6+0x122102)

0x6040000949a8 is located 24 bytes inside of 44-byte region [0x604000094990,0x6040000949bc)
freed by thread T9 here:
    #0 0x7ffff76a07cf in __interceptor_free (/lib/x86_64-linux-gnu/libasan.so.5+0x10d7cf)
    #1 0x7ffff735be19 in __gf_free /root/glusterfs/libglusterfs/src/mem-pool.c:383
    #2 0x7ffff2f2160f in fuse_fd_ctx_destroy /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:141
    #3 0x7ffff2f64205 in fuse_release /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:3483
    #4 0x7ffff2f5dad9 in fuse_dispatch /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:6091
    #5 0x7ffff2f6fd8d in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
    #6 0x7ffff2f6fd8d in fuse_thread_proc /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:6326
    #7 0x7ffff71c5608 in start_thread /build/glibc-YYA7BZ/glibc-2.31/nptl/pthread_create.c:477

previously allocated by thread T9 here:
    #0 0x7ffff76a0dc6 in calloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10ddc6)
    #1 0x7ffff735b226 in __gf_calloc /root/glusterfs/libglusterfs/src/mem-pool.c:177
    #2 0x7ffff2f2a337 in __fuse_fd_ctx_check_n_create /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:90
    #3 0x7ffff2f2a448 in fuse_fd_ctx_check_n_create /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:116
    #4 0x7ffff2f45641 in fuse_open_resume /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:2944
    #5 0x7ffff2f67c81 in fuse_fop_resume /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:1163
    #6 0x7ffff2f1ef0c in fuse_resolve_done /root/glusterfs/xlators/mount/fuse/src/fuse-resolve.c:629
    #7 0x7ffff2f1ef0c in fuse_resolve_all /root/glusterfs/xlators/mount/fuse/src/fuse-resolve.c:653
    #8 0x7ffff2f1ec7c in fuse_resolve /root/glusterfs/xlators/mount/fuse/src/fuse-resolve.c:620
    #9 0x7ffff2f1ef59 in fuse_resolve_all /root/glusterfs/xlators/mount/fuse/src/fuse-resolve.c:650
    #10 0x7ffff2f1ef59 in fuse_resolve_all /root/glusterfs/xlators/mount/fuse/src/fuse-resolve.c:638
    #11 0x7ffff2f1ce7d in fuse_resolve_continue /root/glusterfs/xlators/mount/fuse/src/fuse-resolve.c:668
    #12 0x7ffff2f1e1a6 in fuse_resolve_inode /root/glusterfs/xlators/mount/fuse/src/fuse-resolve.c:352
    #13 0x7ffff2f1e930 in fuse_resolve /root/glusterfs/xlators/mount/fuse/src/fuse-resolve.c:617
    #14 0x7ffff2f1ef59 in fuse_resolve_all /root/glusterfs/xlators/mount/fuse/src/fuse-resolve.c:650
    #15 0x7ffff2f1ef59 in fuse_resolve_all /root/glusterfs/xlators/mount/fuse/src/fuse-resolve.c:638
    #16 0x7ffff2f1eff5 in fuse_resolve_and_resume /root/glusterfs/xlators/mount/fuse/src/fuse-resolve.c:680
    #17 0x7ffff2f64de5 in fuse_open /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:2981
    #18 0x7ffff2f5dad9 in fuse_dispatch /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:6091
    #19 0x7ffff2f6fd8d in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
    #20 0x7ffff2f6fd8d in fuse_thread_proc /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:6326
    #21 0x7ffff71c5608 in start_thread /build/glibc-YYA7BZ/glibc-2.31/nptl/pthread_create.c:477

Thread T6 created by T0 here:
    #0 0x7ffff75cd805 in pthread_create (/lib/x86_64-linux-gnu/libasan.so.5+0x3a805)
    #1 0x7ffff72feb97 in gf_thread_vcreate /root/glusterfs/libglusterfs/src/common-utils.c:3261
    #2 0x7ffff731028d in gf_thread_create /root/glusterfs/libglusterfs/src/common-utils.c:3284
    #3 0x7ffff73feaf2 in event_dispatch_epoll /root/glusterfs/libglusterfs/src/event-epoll.c:797
    #4 0x7ffff7359f89 in gf_event_dispatch /root/glusterfs/libglusterfs/src/event.c:115
    #5 0x7ffff7467b7f in gf_io_main /root/glusterfs/libglusterfs/src/gf-io.c:431
    #6 0x7ffff7467b7f in gf_io_run /root/glusterfs/libglusterfs/src/gf-io.c:516
    #7 0x55555556c37a in main /root/glusterfs/glusterfsd/src/glusterfsd.c:2774
    #8 0x7ffff6fef0b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)

Thread T9 created by T7 here:
    #0 0x7ffff75cd805 in pthread_create (/lib/x86_64-linux-gnu/libasan.so.5+0x3a805)
    #1 0x7ffff72feb97 in gf_thread_vcreate /root/glusterfs/libglusterfs/src/common-utils.c:3261
    #2 0x7ffff731028d in gf_thread_create /root/glusterfs/libglusterfs/src/common-utils.c:3284
    #3 0x7ffff2f712a9 in notify /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:6582
    #4 0x7ffff72e8474 in xlator_notify /root/glusterfs/libglusterfs/src/xlator.c:709
    #5 0x7ffff74f5c70 in default_notify /root/glusterfs/libglusterfs/src/defaults.c:3382
    #6 0x7ffff72e8474 in xlator_notify /root/glusterfs/libglusterfs/src/xlator.c:709
    #7 0x7ffff74f5833 in default_notify /root/glusterfs/libglusterfs/src/defaults.c:3387
    #8 0x7fffeec6111b in notify /root/glusterfs/xlators/debug/io-stats/src/io-stats.c:4335
    #9 0x7ffff72e8474 in xlator_notify /root/glusterfs/libglusterfs/src/xlator.c:709
    #10 0x7ffff74f5833 in default_notify /root/glusterfs/libglusterfs/src/defaults.c:3387
    #11 0x7fffeec935e0 in notify /root/glusterfs/xlators/performance/io-threads/src/io-threads.c:1333
    #12 0x7ffff72e8474 in xlator_notify /root/glusterfs/libglusterfs/src/xlator.c:709
    #13 0x7ffff74f5833 in default_notify /root/glusterfs/libglusterfs/src/defaults.c:3387
    #14 0x7fffeecf14e3 in mdc_notify /root/glusterfs/xlators/performance/md-cache/src/md-cache.c:3827
    #15 0x7ffff72e8474 in xlator_notify /root/glusterfs/libglusterfs/src/xlator.c:709
    #16 0x7ffff74f5833 in default_notify /root/glusterfs/libglusterfs/src/defaults.c:3387
    #17 0x7fffeed1b973 in qr_notify /root/glusterfs/xlators/performance/quick-read/src/quick-read.c:1506
    #18 0x7ffff72e8474 in xlator_notify /root/glusterfs/libglusterfs/src/xlator.c:709
    #19 0x7ffff74f5833 in default_notify /root/glusterfs/libglusterfs/src/defaults.c:3387
    #20 0x7ffff72e8474 in xlator_notify /root/glusterfs/libglusterfs/src/xlator.c:709
    #21 0x7ffff74f5833 in default_notify /root/glusterfs/libglusterfs/src/defaults.c:3387
    #22 0x7ffff72e8474 in xlator_notify /root/glusterfs/libglusterfs/src/xlator.c:709
    #23 0x7ffff74f5833 in default_notify /root/glusterfs/libglusterfs/src/defaults.c:3387
    #24 0x7fffeed8f5cf in notify ../../../../xlators/features/utime/src/utime.c:318
    #25 0x7ffff72e8474 in xlator_notify /root/glusterfs/libglusterfs/src/xlator.c:709
    #26 0x7ffff74f5833 in default_notify /root/glusterfs/libglusterfs/src/defaults.c:3387
    #27 0x7fffeef0757f in dht_notify /root/glusterfs/xlators/cluster/dht/src/dht-common.c:11252
    #28 0x7ffff72e8474 in xlator_notify /root/glusterfs/libglusterfs/src/xlator.c:709
    #29 0x7ffff74f5833 in default_notify /root/glusterfs/libglusterfs/src/defaults.c:3387
    #30 0x7fffef01613f in ec_notify /root/glusterfs/xlators/cluster/ec/src/ec.c:680
    #31 0x7fffef016986 in notify /root/glusterfs/xlators/cluster/ec/src/ec.c:697
    #32 0x7ffff72e8474 in xlator_notify /root/glusterfs/libglusterfs/src/xlator.c:709
    #33 0x7ffff74f5833 in default_notify /root/glusterfs/libglusterfs/src/defaults.c:3387
    #34 0x7fffef19deda in client_notify_dispatch /root/glusterfs/xlators/protocol/client/src/client.c:146
    #35 0x7fffef19e1d9 in client_notify_dispatch_uniq /root/glusterfs/xlators/protocol/client/src/client.c:118
    #36 0x7fffef20b785 in client_notify_parents_child_up /root/glusterfs/xlators/protocol/client/src/client-handshake.c:53
    #37 0x7fffef21094f in client_post_handshake /root/glusterfs/xlators/protocol/client/src/client-handshake.c:443
    #38 0x7fffef21094f in client_setvolume_cbk /root/glusterfs/xlators/protocol/client/src/client-handshake.c:628
    #39 0x7ffff7225fca in rpc_clnt_handle_reply /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
    #40 0x7ffff7225fca in rpc_clnt_notify /root/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
    #41 0x7ffff721f983 in rpc_transport_notify /root/glusterfs/rpc/rpc-lib/src/rpc-transport.c:521
    #42 0x7ffff03465a6 in socket_event_poll_in_async /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
    #43 0x7ffff0356b39 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
    #44 0x7ffff0356b39 in socket_event_poll_in /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
    #45 0x7ffff0356b39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
    #46 0x7ffff0356b39 in socket_event_handler /root/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
    #47 0x7ffff74006c0 in event_dispatch_epoll_handler /root/glusterfs/libglusterfs/src/event-epoll.c:631
    #48 0x7ffff74006c0 in event_dispatch_epoll_worker /root/glusterfs/libglusterfs/src/event-epoll.c:742
    #49 0x7ffff71c5608 in start_thread /build/glibc-YYA7BZ/glibc-2.31/nptl/pthread_create.c:477

Thread T7 created by T0 here:
    #0 0x7ffff75cd805 in pthread_create (/lib/x86_64-linux-gnu/libasan.so.5+0x3a805)
    #1 0x7ffff72feb97 in gf_thread_vcreate /root/glusterfs/libglusterfs/src/common-utils.c:3261
    #2 0x7ffff731028d in gf_thread_create /root/glusterfs/libglusterfs/src/common-utils.c:3284
    #3 0x7ffff73feaf2 in event_dispatch_epoll /root/glusterfs/libglusterfs/src/event-epoll.c:797
    #4 0x7ffff7359f89 in gf_event_dispatch /root/glusterfs/libglusterfs/src/event.c:115
    #5 0x7ffff7467b7f in gf_io_main /root/glusterfs/libglusterfs/src/gf-io.c:431
    #6 0x7ffff7467b7f in gf_io_run /root/glusterfs/libglusterfs/src/gf-io.c:516
    #7 0x55555556c37a in main /root/glusterfs/glusterfsd/src/glusterfsd.c:2774
    #8 0x7ffff6fef0b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)

SUMMARY: AddressSanitizer: heap-use-after-free /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:1564 in fuse_fd_inherit_directio
Shadow bytes around the buggy address:
  0x0c088000a8e0: fa fa fd fd fd fd fd fd fa fa fa fa fa fa fa fa
  0x0c088000a8f0: fa fa fd fd fd fd fd fd fa fa fd fd fd fd fd fd
  0x0c088000a900: fa fa fa fa fa fa fa fa fa fa fd fd fd fd fd fa
  0x0c088000a910: fa fa fa fa fa fa fa fa fa fa fd fd fd fd fd fd
  0x0c088000a920: fa fa fd fd fd fd fd fd fa fa fa fa fa fa fa fa
=>0x0c088000a930: fa fa fd fd fd[fd]fd fd fa fa 00 00 00 00 00 06
  0x0c088000a940: fa fa fd fd fd fd fd fd fa fa fd fd fd fd fd fd
  0x0c088000a950: fa fa fa fa fa fa fa fa fa fa fd fd fd fd fd fd
  0x0c088000a960: fa fa fd fd fd fd fd fd fa fa fd fd fd fd fd fa
  0x0c088000a970: fa fa fa fa fa fa fa fa fa fa fd fd fd fd fd fa
  0x0c088000a980: fa fa fd fd fd fd fd fd fa fa fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==380==ABORTING

- The operating system / glusterfs version:
Ubuntu 20.04 LTS with kernel 5.15
GlusterFS at commit 79154ae

lvtao-sec commented:

Hi @mohit84 ,

I reported this bug several months ago. Recently I have been trying to debug and fix it, but a few things still confuse me. Would you mind helping me? Thanks in advance.

First, the IOT worker thread reads a FUSE_OPEN request from the /dev/fuse device and sends it on to the servers; along the way a fuse_fd_ctx_t is allocated.

Before the replies come back from the servers, another FUSE operation, FUSE_RELEASE, is executed and this fuse_fd_ctx_t is freed.

Finally, when the replies arrive and the open callback dereferences the fuse_fd_ctx_t, the use-after-free is triggered.
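
For reference, here is a tiny standalone analogy of the interleaving I suspect (plain pthreads, not GlusterFS code; the names are just placeholders). One thread plays the open reply callback that still holds a raw pointer to the per-fd context, the other plays fuse_release() and frees that context first. Built with cc -pthread -fsanitize=address, it reports the same class of heap-use-after-free:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    struct fd_ctx { int flags; };            /* stand-in for fuse_fd_ctx_t */

    static struct fd_ctx *ctx;               /* shared raw pointer, no refcounting */

    static void *reply_thread(void *arg)     /* plays the open reply callback */
    {
        (void)arg;
        usleep(10000);                       /* the reply from the servers arrives "late" */
        printf("flags = %d\n", ctx->flags);  /* reads freed memory once release has won */
        return NULL;
    }

    static void *release_thread(void *arg)   /* plays fuse_release()/fuse_fd_ctx_destroy() */
    {
        (void)arg;
        free(ctx);                           /* frees while the reply path still holds the pointer */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        ctx = calloc(1, sizeof(*ctx));       /* allocated on open, as in fuse_open_resume() */
        pthread_create(&t1, NULL, reply_thread, NULL);
        pthread_create(&t2, NULL, release_thread, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

In the real client the window is between winding the open to the servers and running fuse_fd_cbk(), which is why it seems to need very precise timing to hit.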

What confuses me is how FUSE_OPEN and FUSE_RELEASE can run on the same file concurrently, since FUSE_RELEASE needs the file handle returned by FUSE_OPEN.

Do you have any idea about this? I look forward to your help.


mohit84 commented Jun 5, 2023

Can you please try to reproduce it with open-behind disabled?

lvtao-sec commented:

Hi @mohit84 ,

Thanks a lot for your reply.
I can't reproduce this bug deterministically, since it's a concurrency bug.

I'm fuzzing it continuously, and the bug is triggered occasionally.
I checked the gluster configuration in my fuzzing environment: open-behind is disabled.

My key confusion now is how FUSE_OPEN and FUSE_RELEASE can happen on the same file/fd concurrently. Do you have any idea about this?

Thanks in advance.


mohit84 commented Jun 5, 2023

It can happen during a graph switch. During a graph switch, fuse opens a new fd on the new subvolume and unrefs the old fd.

lvtao-sec commented:

Thanks for your reply.

Does "fuse" here mean the gluster fuse xlator or the kernel fuse module?
Here, the fuse_fd_ctx_t memory is released when a FUSE_RELEASE request is received from the kernel fuse module.
Can this FUSE_RELEASE be issued as part of a graph switch?

Best,


mohit84 commented Jun 5, 2023

Here fuse means the gluster fuse xlator. During fd_unref, if the refcount has reached 0, fuse winds a release (or releasedir) fop during fd cleanup (fd_destroy), so it can be possible. I don't think we can ignore it. I am not sure in what scenario you faced this issue.
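
In other words, the lifecycle being described looks roughly like this (a minimal compilable sketch with made-up names and no locking, not the real fd_unref()/fd_destroy() from libglusterfs):

    #include <stdio.h>
    #include <stdlib.h>

    struct sketch_fd {
        int refcount;
        void (*release)(struct sketch_fd *fd); /* stands in for the xlator's release()/releasedir() cbk */
        void *fuse_ctx;                        /* stands in for the per-fd fuse context */
    };

    static void sketch_fd_destroy(struct sketch_fd *fd)
    {
        if (fd->release)
            fd->release(fd);                   /* this is where the release(dir) fop gets wound */
        free(fd->fuse_ctx);                    /* per-fd private data is cleaned up at destroy time */
        free(fd);
    }

    static void sketch_fd_unref(struct sketch_fd *fd)
    {
        if (--fd->refcount == 0)               /* the real code guards this against concurrent callers */
            sketch_fd_destroy(fd);             /* cleanup only happens once the last ref drops */
    }

    static void on_release(struct sketch_fd *fd)
    {
        printf("release fop wound for fd %p\n", (void *)fd);
    }

    int main(void)
    {
        struct sketch_fd *fd = calloc(1, sizeof(*fd));

        fd->refcount = 2;                      /* e.g. the application handle plus an in-flight fop */
        fd->release = on_release;
        fd->fuse_ctx = calloc(1, 44);          /* same size as the freed region in the ASan report */
        sketch_fd_unref(fd);                   /* first unref: nothing is freed yet */
        sketch_fd_unref(fd);                   /* last unref: release runs, then the fd and ctx go away */
        return 0;
    }

So during a graph switch the last reference on the old fd can be dropped from inside gluster itself, and the release is wound without the kernel having to send FUSE_RELEASE.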


lvtao-sec commented Jun 5, 2023

Here fuse means the gluster fuse xlator. During fd_unref, if the refcount has reached 0, fuse winds a release (or releasedir) fop during fd cleanup (fd_destroy), so it can be possible.

However, the memory release is done by a FUSE_RELEASE request (leading to the fuse_release function call). In theory, this request can only be sent by the kernel, and gluster fuse merely executes it; gluster fuse cannot issue FUSE_RELEASE itself, right? See the backtrace of the free below:

0x6040000949a8 is located 24 bytes inside of 44-byte region [0x604000094990,0x6040000949bc)
freed by thread T9 here:
    #0 0x7ffff76a07cf in __interceptor_free (/lib/x86_64-linux-gnu/libasan.so.5+0x10d7cf)
    #1 0x7ffff735be19 in __gf_free /root/glusterfs/libglusterfs/src/mem-pool.c:383
    #2 0x7ffff2f2160f in fuse_fd_ctx_destroy /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:141
    #3 0x7ffff2f64205 in fuse_release /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:3483
    #4 0x7ffff2f5dad9 in fuse_dispatch /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:6091
    #5 0x7ffff2f6fd8d in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
    #6 0x7ffff2f6fd8d in fuse_thread_proc /root/glusterfs/xlators/mount/fuse/src/fuse-bridge.c:6326
    #7 0x7ffff71c5608 in start_thread /build/glibc-YYA7BZ/glibc-2.31/nptl/pthread_create.c:477

The gluster configuration is: gluster volume create test-volume disperse 3 redundancy 1 $srvs force

This bug can sometimes be triggered by this PoC:

r0 = open$dir(&(0x7f0000000000)='./file0\x00', 0x40040, 0x0)
r1 = open(&(0x7f0000000040)='./file0\x00', 0x2300, 0x0)
fsetxattr$security_ima(r1, &(0x7f0000000080), 0x0, 0x0, 0x0)
r2 = open(&(0x7f00000000c0)='./file0/file0\x00', 0x100, 0x24)
write$binfmt_aout(r0, &(0x7f0000000640)={{0x108, 0xd8, 0x3, 0x350, 0x22f, 0x9, 0x245, 0x9}, "a30fc845338b1fc576d17087199eeb89296aefe77a34cf64359bf31dcbb5dab07ca85b4b01a39c76def457575040a300a6ae1b78df0ee3b72eeed79c924dfcc320631d34e006729738e0d07c2091e8b22ea5055afc5caaf4f9eb0a2472197c32634d499da949189cd13b4cea467ba55317de10e83608ee5f49821a17c67a7a67f4f87866562f6a92783556ab9cb424887d1a27", ['\x00', '\x00', '\x00', '\x00', '\x00']}, 0x5b3)


mohit84 commented Jun 5, 2023

Yes, in this case FUSE_RELEASE is triggered by the kernel. Is it possible to capture a fuse dump? You need to pass a file name via the dump-fuse option to the client process.

lvtao-sec commented:

Hi,

I captured the fuse dump below. The problem is that the run recorded in the dump does not trigger the bug, because we have no idea what the concurrency requirements are.

2023-06-06T14:52:36.755376777+02:00 "GLUSTER\xf5" INIT {Len:56 Opcode:26 Unique:2 Nodeid:0 Uid:0 Gid:0 Pid:0 Padding:0} {Major:7 Minor:34 MaxReadahead:131072 Flags:872415227} 
2023-06-06T14:52:36.75631691+02:00 "GLUSTER\xf5" {Len:80 Error:0 Unique:2} {Major:7 Minor:24 MaxReadahead:131072 Flags:42083 MaxBackground:64 CongestionThreshold:48 MaxWrite:131072 TimeGran:0 Unused:0} 
2023-06-06T14:52:36.771342495+02:00 "GLUSTER\xf5" GETATTR {Len:56 Opcode:3 Unique:4 Nodeid:1 Uid:0 Gid:0 Pid:374 Padding:0} {GetattrFlags:0 Dummy:0 Fh:0} 
2023-06-06T14:52:36.777033441+02:00 "GLUSTER\xf5" {Len:120 Error:0 Unique:4} {AttrValid:1 AttrValidNsec:0 Dummy:32767 Attr:{Ino:1 Size:4096 Blocks:16 Atime:1671871720 Mtime:1686055952 Ctime:1686055954 Atimensec:775000000 Mtimensec:632891332 Ctimensec:356891232 Mode:16877 Nlink:3 Uid:0 Gid:0 Rdev:0 Blksize:131072 Padding:24848}} 
2023-06-06T14:53:12.097928777+02:00 "GLUSTER\xf5" GETATTR {Len:56 Opcode:3 Unique:6 Nodeid:1 Uid:0 Gid:0 Pid:393 Padding:0} {GetattrFlags:0 Dummy:0 Fh:0} 
2023-06-06T14:53:12.105393265+02:00 "GLUSTER\xf5" {Len:120 Error:0 Unique:6} {AttrValid:1 AttrValidNsec:0 Dummy:32767 Attr:{Ino:1 Size:4096 Blocks:16 Atime:1671871720 Mtime:1686055952 Ctime:1686055954 Atimensec:775000000 Mtimensec:632891332 Ctimensec:356891232 Mode:16877 Nlink:3 Uid:0 Gid:0 Rdev:0 Blksize:131072 Padding:24848}} 
2023-06-06T14:53:12.105725459+02:00 "GLUSTER\xf5" LOOKUP {Len:43 Opcode:1 Unique:8 Nodeid:1 Uid:0 Gid:0 Pid:393 Padding:0} ft 
2023-06-06T14:53:12.117132765+02:00 "GLUSTER\xf5" {Len:16 Error:-2 Unique:8} "" 
2023-06-06T14:53:12.117545225+02:00 "GLUSTER\xf5" LOOKUP {Len:43 Opcode:1 Unique:10 Nodeid:1 Uid:0 Gid:0 Pid:393 Padding:0} ft 
2023-06-06T14:53:12.127953955+02:00 "GLUSTER\xf5" {Len:16 Error:-2 Unique:10} "" 
2023-06-06T14:53:12.128233256+02:00 "GLUSTER\xf5" CREATE {Len:59 Opcode:35 Unique:12 Nodeid:1 Uid:0 Gid:0 Pid:393 Padding:0} {Flags:32833 Mode:33261 Umask:18 Padding:0} ft 
2023-06-06T14:53:12.154402494+02:00 "GLUSTER\xf5" {Len:160 Error:0 Unique:12} {Nodeid:106721347455896 Generation:0 EntryValid:1 AttrValid:1 EntryValidNsec:0 AttrValidNsec:0 Attr:{Ino:11923243485788492331 Size:0 Blocks:0 Atime:1686055992 Mtime:1686055992 Ctime:1686055992 Atimensec:135402741 Mtimensec:135402741 Ctimensec:135402741 Mode:33261 Nlink:1 Uid:0 Gid:0 Rdev:0 Blksize:131072 Padding:0}} {Fh:106515188986360 OpenFlags:0 Padding:0} 
2023-06-06T14:53:12.15728557+02:00 "GLUSTER\xf5" GETXATTR {Len:68 Opcode:22 Unique:14 Nodeid:106721347455896 Uid:0 Gid:0 Pid:393 Padding:0} {Size:0 Padding:0} security.capability 
2023-06-06T14:53:12.157451647+02:00 "GLUSTER\xf5" {Len:16 Error:-61 Unique:14} "" 
2023-06-06T14:53:12.15779406+02:00 "GLUSTER\xf5" WRITE {Len:17200 Opcode:16 Unique:16 Nodeid:106721347455896 Uid:0 Gid:0 Pid:393 Padding:0} {Fh:106515188986360 Offset:0 Size:17120 WriteFlags:0 LockOwner:0 Flags:32769 Padding:0} "\u007fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00>\x00\x01\x00\x00\x00\xc0\x11\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00 ;\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00@\x008\x00\r\x00@\x00\x1f\x00\x1e\x00\x06\x00\x00\x00\x04\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00\xd8\x02\x00\x00\x00\x00\x00\x00\xd8\x02\x00\x00\x00\x00\x00\x00\b\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x18\x03\x00\x00\x00\x00\x00\x00\x18\x03\x00\x00\x00\x00\x00\x00\x18\x03\x00\x00\x00\x00\x00\x00\x1c\x00\x00\x00\x00\x00\x00\x00\x1c\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00@\t\x00\x00\x00\x00\x00\x00@\t\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x05\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x005\v\x00\x00\x00\x00\x00\x005\v\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x04\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\b\x04\x00\x00\x00\x00\x00\x00\b\x04\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x06\x00\x00\x00P-\x00\x00\x00\x00\x00\x00P=\x00\x00\x00\x00\x00\x00P=\x00\x00\x00\x00\x00\x00\xd0\x02\x00\x00\x00\x00\x00\x00\xf8\x03\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x06\x00\x00\x00`-\x00\x00\x00\x00\x00\x00`=\x00\x00\x00\x00\x00\x00`=\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\b\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x04\x00\x00\x008\x03\x00\x00\x00\x00\x00\x008\x03\x00\x00\x00\x00\x00\x008\x03\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00 \x00\x00\x00\x00\x00\x00\x00\b\x00\x00\x00\x00\x00\x00\x00"... 17120 
2023-06-06T14:53:12.159876995+02:00 "GLUSTER\xf5" {Len:24 Error:0 Unique:16} {Size:17120 Padding:0} 
2023-06-06T14:53:12.159983571+02:00 "GLUSTER\xf5" GETXATTR {Len:68 Opcode:22 Unique:18 Nodeid:106721347455896 Uid:0 Gid:0 Pid:393 Padding:0} {Size:0 Padding:0} security.capability 
2023-06-06T14:53:12.160125964+02:00 "GLUSTER\xf5" {Len:16 Error:-61 Unique:18} "" 
2023-06-06T14:53:12.160620874+02:00 "GLUSTER\xf5" SETATTR {Len:128 Opcode:4 Unique:20 Nodeid:106721347455896 Uid:0 Gid:0 Pid:393 Padding:0} {Valid:584 Padding:0 Fh:106515188986360 Size:17120 LockOwner:12322600559131179095 Atime:0 Mtime:0 Ctime:0 Atimensec:0 Mtimensec:0 Ctimensec:0 Mode:0 Unused4:0 Uid:0 Gid:0 Unused5:0} 
2023-06-06T14:53:12.185974903+02:00 "GLUSTER\xf5" {Len:120 Error:0 Unique:20} {AttrValid:1 AttrValidNsec:0 Dummy:32767 Attr:{Ino:11923243485788492331 Size:17120 Blocks:34 Atime:1686055992 Mtime:1686055992 Ctime:1686055992 Atimensec:135402741 Mtimensec:162353288 Ctimensec:162353288 Mode:33261 Nlink:1 Uid:0 Gid:0 Rdev:0 Blksize:131072 Padding:0}} 
2023-06-06T14:53:12.186262725+02:00 "GLUSTER\xf5" FLUSH {Len:64 Opcode:25 Unique:22 Nodeid:106721347455896 Uid:0 Gid:0 Pid:393 Padding:0} {Fh:106515188986360 Unused:0 Padding:0 LockOwner:12322600559131179095} 
2023-06-06T14:53:12.190218992+02:00 "GLUSTER\xf5" {Len:16 Error:0 Unique:22} 
2023-06-06T14:53:12.190259554+02:00 "GLUSTER\xf5" RELEASE {Len:64 Opcode:18 Unique:24 Nodeid:106721347455896 Uid:0 Gid:0 Pid:0 Padding:0} {Fh:106515188986360 Flags:32769 ReleaseFlags:0 LockOwner:0} 
2023-06-06T14:53:12.190818066+02:00 "GLUSTER\xf5" {Len:16 Error:0 Unique:24} 
2023-06-06T14:53:25.988318122+02:00 "GLUSTER\xf5" GETATTR {Len:56 Opcode:3 Unique:26 Nodeid:1 Uid:0 Gid:0 Pid:262 Padding:0} {GetattrFlags:0 Dummy:0 Fh:0} 
2023-06-06T14:53:25.996953545+02:00 "GLUSTER\xf5" {Len:120 Error:0 Unique:26} {AttrValid:1 AttrValidNsec:0 Dummy:32767 Attr:{Ino:1 Size:4096 Blocks:16 Atime:1671871720 Mtime:1686055992 Ctime:1686055992 Atimensec:775000000 Mtimensec:135402741 Ctimensec:135402741 Mode:16877 Nlink:3 Uid:0 Gid:0 Rdev:0 Blksize:131072 Padding:24848}} 
2023-06-06T14:53:29.314940219+02:00 "GLUSTER\xf5" GETATTR {Len:56 Opcode:3 Unique:28 Nodeid:1 Uid:0 Gid:0 Pid:394 Padding:0} {GetattrFlags:0 Dummy:0 Fh:0} 
2023-06-06T14:53:29.323370023+02:00 "GLUSTER\xf5" {Len:120 Error:0 Unique:28} {AttrValid:1 AttrValidNsec:0 Dummy:32767 Attr:{Ino:1 Size:4096 Blocks:16 Atime:1671871720 Mtime:1686055992 Ctime:1686055992 Atimensec:775000000 Mtimensec:135402741 Ctimensec:135402741 Mode:16877 Nlink:3 Uid:0 Gid:0 Rdev:0 Blksize:131072 Padding:24848}} 
2023-06-06T14:53:29.323649796+02:00 "GLUSTER\xf5" LOOKUP {Len:43 Opcode:1 Unique:30 Nodeid:1 Uid:0 Gid:0 Pid:394 Padding:0} ft 
2023-06-06T14:53:29.327955857+02:00 "GLUSTER\xf5" {Len:144 Error:0 Unique:30} {Nodeid:106721347455896 Generation:0 EntryValid:1 AttrValid:1 EntryValidNsec:0 AttrValidNsec:0 Attr:{Ino:11923243485788492331 Size:17120 Blocks:34 Atime:1686055992 Mtime:1686055992 Ctime:1686055992 Atimensec:135402741 Mtimensec:162353288 Ctimensec:162353288 Mode:33261 Nlink:1 Uid:0 Gid:0 Rdev:0 Blksize:131072 Padding:0}} 
2023-06-06T14:53:29.328116572+02:00 "GLUSTER\xf5" OPEN {Len:48 Opcode:14 Unique:32 Nodeid:106721347455896 Uid:0 Gid:0 Pid:394 Padding:0} {Flags:32800 Unused:0} 
2023-06-06T14:53:29.328851026+02:00 "GLUSTER\xf5" {Len:32 Error:0 Unique:32} {Fh:106515189100824 OpenFlags:2 Padding:0} 
2023-06-06T14:53:29.32907501+02:00 "GLUSTER\xf5" GETXATTR {Len:68 Opcode:22 Unique:34 Nodeid:106721347455896 Uid:0 Gid:0 Pid:394 Padding:0} {Size:24 Padding:0} security.capability 
2023-06-06T14:53:29.329146211+02:00 "GLUSTER\xf5" {Len:16 Error:-61 Unique:34} "" 
2023-06-06T14:53:29.331480338+02:00 "GLUSTER\xf5" LOOKUP {Len:46 Opcode:1 Unique:36 Nodeid:1 Uid:0 Gid:0 Pid:395 Padding:0} file0 
2023-06-06T14:53:29.339201079+02:00 "GLUSTER\xf5" {Len:16 Error:-2 Unique:36} "" 
2023-06-06T14:53:29.339287473+02:00 "GLUSTER\xf5" CREATE {Len:62 Opcode:35 Unique:38 Nodeid:1 Uid:0 Gid:0 Pid:395 Padding:0} {Flags:294976 Mode:32768 Umask:18 Padding:0} file0 
2023-06-06T14:53:29.350823325+02:00 "GLUSTER\xf5" {Len:160 Error:0 Unique:38} {Nodeid:106721347493016 Generation:0 EntryValid:1 AttrValid:1 EntryValidNsec:0 AttrValidNsec:0 Attr:{Ino:13581774064524457119 Size:0 Blocks:0 Atime:1686056009 Mtime:1686056009 Ctime:1686056009 Atimensec:343030681 Mtimensec:343030681 Ctimensec:343030681 Mode:32768 Nlink:1 Uid:0 Gid:0 Rdev:0 Blksize:131072 Padding:0}} {Fh:106515189021976 OpenFlags:0 Padding:0} 
2023-06-06T14:53:29.351126351+02:00 "GLUSTER\xf5" GETATTR {Len:56 Opcode:3 Unique:40 Nodeid:1 Uid:0 Gid:0 Pid:395 Padding:0} {GetattrFlags:0 Dummy:0 Fh:0} 
2023-06-06T14:53:29.355099765+02:00 "GLUSTER\xf5" {Len:120 Error:0 Unique:40} {AttrValid:1 AttrValidNsec:0 Dummy:32767 Attr:{Ino:1 Size:4096 Blocks:16 Atime:1671871720 Mtime:1686056009 Ctime:1686056009 Atimensec:775000000 Mtimensec:343030681 Ctimensec:343030681 Mode:16877 Nlink:3 Uid:0 Gid:0 Rdev:0 Blksize:131072 Padding:24848}} 
2023-06-06T14:53:29.355219669+02:00 "GLUSTER\xf5" OPEN {Len:48 Opcode:14 Unique:42 Nodeid:106721347493016 Uid:0 Gid:0 Pid:395 Padding:0} {Flags:40960 Unused:0} 
2023-06-06T14:53:29.359222984+02:00 "GLUSTER\xf5" {Len:32 Error:0 Unique:42} {Fh:106515189101048 OpenFlags:2 Padding:0} 
2023-06-06T14:53:29.359347198+02:00 "GLUSTER\xf5" GETXATTR {Len:68 Opcode:22 Unique:44 Nodeid:106721347493016 Uid:0 Gid:0 Pid:395 Padding:0} {Size:0 Padding:0} security.capability 
2023-06-06T14:53:29.359425082+02:00 "GLUSTER\xf5" {Len:16 Error:-61 Unique:44} "" 
2023-06-06T14:53:29.359726746+02:00 "GLUSTER\xf5" SETATTR {Len:128 Opcode:4 Unique:46 Nodeid:106721347493016 Uid:0 Gid:0 Pid:395 Padding:0} {Valid:520 Padding:0 Fh:0 Size:0 LockOwner:2682281394363050686 Atime:0 Mtime:0 Ctime:0 Atimensec:0 Mtimensec:0 Ctimensec:0 Mode:0 Unused4:0 Uid:0 Gid:0 Unused5:0} 
2023-06-06T14:53:29.370954597+02:00 "GLUSTER\xf5" {Len:120 Error:0 Unique:46} {AttrValid:1 AttrValidNsec:0 Dummy:32767 Attr:{Ino:13581774064524457119 Size:0 Blocks:0 Atime:1686056009 Mtime:1686056009 Ctime:1686056009 Atimensec:343030681 Mtimensec:360399122 Ctimensec:360399122 Mode:32768 Nlink:1 Uid:0 Gid:0 Rdev:0 Blksize:131072 Padding:0}} 
2023-06-06T14:53:29.37122347+02:00 "GLUSTER\xf5" SETXATTR {Len:61 Opcode:21 Unique:48 Nodeid:106721347493016 Uid:0 Gid:0 Pid:395 Padding:0} {Size:0 Flags:0} security.ima "" 
2023-06-06T14:53:29.375347395+02:00 "GLUSTER\xf5" {Len:16 Error:0 Unique:48} 
2023-06-06T14:53:29.376417321+02:00 "GLUSTER\xf5" FLUSH {Len:64 Opcode:25 Unique:50 Nodeid:106721347493016 Uid:0 Gid:0 Pid:395 Padding:0} {Fh:106515189021976 Unused:0 Padding:0 LockOwner:2682281394363050686} 
2023-06-06T14:53:29.377774026+02:00 "GLUSTER\xf5" {Len:16 Error:0 Unique:50} 
2023-06-06T14:53:29.377860999+02:00 "GLUSTER\xf5" FLUSH {Len:64 Opcode:25 Unique:52 Nodeid:106721347493016 Uid:0 Gid:0 Pid:395 Padding:0} {Fh:106515189101048 Unused:0 Padding:0 LockOwner:2682281394363050686} 
2023-06-06T14:53:29.378654235+02:00 "GLUSTER\xf5" {Len:16 Error:0 Unique:52} 
2023-06-06T14:53:29.378770871+02:00 "GLUSTER\xf5" RELEASE {Len:64 Opcode:18 Unique:54 Nodeid:106721347493016 Uid:0 Gid:0 Pid:0 Padding:0} {Fh:106515189101048 Flags:40960 ReleaseFlags:0 LockOwner:0} 
2023-06-06T14:53:29.378998523+02:00 "GLUSTER\xf5" {Len:16 Error:0 Unique:54} 
2023-06-06T14:53:29.379285285+02:00 "GLUSTER\xf5" RELEASE {Len:64 Opcode:18 Unique:56 Nodeid:106721347493016 Uid:0 Gid:0 Pid:0 Padding:0} {Fh:106515189021976 Flags:294912 ReleaseFlags:0 LockOwner:0} 
2023-06-06T14:53:29.379365776+02:00 "GLUSTER\xf5" {Len:16 Error:0 Unique:56} 
2023-06-06T14:53:29.379592734+02:00 "GLUSTER\xf5" RELEASE {Len:64 Opcode:18 Unique:58 Nodeid:106721347455896 Uid:0 Gid:0 Pid:0 Padding:0} {Fh:106515189100824 Flags:32800 ReleaseFlags:0 LockOwner:0} 
2023-06-06T14:53:29.379771245+02:00 "GLUSTER\xf5" {Len:16 Error:0 Unique:58} 
2023-06-06T14:53:36.578013139+02:00 "GLUSTER\xf5" GETATTR {Len:56 Opcode:3 Unique:60 Nodeid:1 Uid:0 Gid:0 Pid:262 Padding:0} {GetattrFlags:0 Dummy:0 Fh:0} 
2023-06-06T14:53:36.586466988+02:00 "GLUSTER\xf5" {Len:120 Error:0 Unique:60} {AttrValid:1 AttrValidNsec:0 Dummy:32767 Attr:{Ino:1 Size:4096 Blocks:16 Atime:1671871720 Mtime:1686056009 Ctime:1686056009 Atimensec:775000000 Mtimensec:343030681 Ctimensec:343030681 Mode:16877 Nlink:3 Uid:0 Gid:0 Rdev:0 Blksize:131072 Padding:24848}}

pranithk self-assigned this Jan 17, 2025
pranithk added this to the Gluster 11.2 milestone Feb 7, 2025
pranithk commented:

Found the issue:
fuse_release() and fuse_releasedir() release the fd-context memory even before the refs drop to 0. This leads to the use-after-free in the race shown in the stack trace. I will send a PR when I get some time. I also found that releasedir() is missing from the cbks of the fuse xlator; I will fix that too.


mykaul commented Feb 14, 2025

Is this order incorrect perhaps?

    fuse_fd_ctx_destroy(this, state->fd);
    fd_unref(state->fd);

pranithk commented:

Correct. I will send a fix for this. This release should happen when the xlator's release()/releasedir() cbks are called.

pranithk commented:

Something like this. I need to test this patch when I get some time.

diff --git a/xlators/mount/fuse/src/fuse-bridge.c b/xlators/mount/fuse/src/fuse-bridge.c
index 05eae9439c..027dedca38 100644
--- a/xlators/mount/fuse/src/fuse-bridge.c
+++ b/xlators/mount/fuse/src/fuse-bridge.c
@@ -3452,13 +3452,8 @@ fuse_release(xlator_t *this, fuse_in_header_t *finh, void *msg,

     fd_close(state->fd);

-    fuse_fd_ctx_destroy(this, state->fd);
-    fd_unref(fd);
-
     gf_fdptr_put(priv->fdtable, fd);

-    state->fd = NULL;
-
 out:
     send_fuse_err(this, finh, 0);

@@ -3904,13 +3899,8 @@ fuse_releasedir(xlator_t *this, fuse_in_header_t *finh, void *msg,
     gf_log("glusterfs-fuse", GF_LOG_TRACE,
            "finh->unique: %" PRIu64 ": RELEASEDIR %p", finh->unique, state->fd);

-    fuse_fd_ctx_destroy(this, state->fd);
-    fd_unref(state->fd);
-
     gf_fdptr_put(priv->fdtable, state->fd);

-    state->fd = NULL;
-
 out:
     send_fuse_err(this, finh, 0);

@@ -7101,7 +7091,8 @@ struct xlator_fops fops;

 struct xlator_cbks cbks = {.invalidate = fuse_invalidate,
                            .forget = fuse_forget_cbk,
-                           .release = fuse_internal_release};
+                           .release = fuse_internal_release,
+                           .releasedir = fuse_internal_release};

pranithk added a commit to pranithk/glusterfs that referenced this issue Feb 28, 2025
Problem:
fuse_fd_ctx_destroy() is being called in
fuse_release()/fuse_releasedir() even before all the refs on the fd
are released. This can lead to race situations where the fd_ctx is
accessed after freeing.

Fix:
Make fuse_release()/fuse_releasedir() do the unrefs and let the final
unref call xlator's release()/releasedir() like they are supposed to.

Fixes: gluster#3945
Change-Id: If01acae815dd7a2b99eb012fff17ce2d044aa9dc
Signed-off-by: Pranith Kumar Karampuri <pranith.karampuri@phonepe.com>
pranithk linked a pull request Feb 28, 2025 that will close this issue