
Increase min block size in stripe translator #5

Closed

alexbers wants to merge 3 commits into gluster:master from alexbers:master

Conversation

@alexbers
Contributor

@alexbers alexbers commented Mar 7, 2012

The stripe translator does not work with block sizes below 16 KB and quietly
corrupts data instead.

See https://bugzilla.redhat.com/show_bug.cgi?id=800326
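
For reference, the stripe block size is exposed through the xlator's volume_options
table, so a fix along these lines raises the minimum accepted value and lets glusterd
reject sub-16 KB sizes up front instead of corrupting data at run time. The entry below
is only an illustrative C sketch, not the submitted patch; the key name, default value
and option type are assumptions:

    /* Illustrative sketch only, not the submitted patch: raising .min in the
     * stripe xlator's option table makes values below 16 KB fail validation
     * instead of being accepted and silently corrupting data.
     * Assumes the usual libglusterfs option/unit definitions. */
    struct volume_options options[] = {
        {
            .key = {"block-size"},
            .type = GF_OPTION_TYPE_SIZET,
            .default_value = "128KB",
            .min = 16 * GF_UNIT_KB, /* new lower bound: 16 KB */
            .description = "Size of the stripe unit that is read from or "
                           "written to each striped subvolume.",
        },
        {.key = {NULL}},
    };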

alexbers and others added 3 commits March 7, 2012 21:16
The stripe translator does not work with block sizes below 16 KB and quietly
corrupts data instead.

See https://bugzilla.redhat.com/show_bug.cgi?id=800326
Conflicts:
	xlators/cluster/stripe/src/stripe.c
@avati
Member

avati commented Sep 7, 2013

Please submit this patch through review.gluster.org. More details at http://www.gluster.org/community/documentation/index.php/Development_Work_Flow

@avati avati closed this Sep 7, 2013
mscherer pushed a commit that referenced this pull request Jul 28, 2017
…ction

Problem: Sometimes the brick process crashes in the notify function while
         cleaning up the db connection when brick mux is enabled.

Solution: In the changetimerecorder (ctr) notify function, set db_conn to
          NULL after cleaning up the db connection so that the same
          connection is not reused.

Note: Below is the backtrace pattern shown by the brick process
      #0  0x00007ff98a30c1f7 in raise () from /lib64/libc.so.6
      #1  0x00007ff98a30d8e8 in abort () from /lib64/libc.so.6
      #2  0x00007ff98a34bf47 in __libc_message () from /lib64/libc.so.6
      #3  0x00007ff98a351b54 in malloc_printerr () from /lib64/libc.so.6
      #4  0x00007ff98a3537aa in _int_free () from /lib64/libc.so.6
      #5  0x00007ff97d95e311 in gf_sql_connection_fini (sql_connection=sql_connection@entry=0x7ff8e8496b50) at gfdb_sqlite3.c:42
      #6  0x00007ff97d95e38a in gf_sqlite3_fini (db_conn=0x7ff92ca04470) at gfdb_sqlite3.c:507
      #7  0x00007ff97d957156 in fini_db (_conn_node=0x7ff92ca04470) at gfdb_data_store.c:326
      #8  0x00007ff97db78679 in notify (this=0x7ff92c5b3670, event=9, data=0x7ff92c5b5a00) at changetimerecorder.c:2178
      #9  0x00007ff98bca0dc2 in xlator_notify (xl=0x7ff92c5b3670, event=event@entry=9, data=data@entry=0x7ff92c5b5a00) at xlator.c:549
      #10 0x00007ff98bd3ac12 in default_notify (this=this@entry=0x7ff92c5b5a00, event=9, data=data@entry=0x7ff92c5b6d50) at defaults.c:3139

BUG: 1475632
Change-Id: Idd4bfdb4629c4799ac477ade81228065212683fb
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://review.gluster.org/17888
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
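
The shape of that fix is simply to clear the cached handle after tearing it down so a
later notify cannot touch freed memory. A condensed C sketch of the idea (the private
structure field names and the event constant used here are assumptions, not copied from
the patch):

    /* Condensed sketch of the described fix: after fini_db() tears the
     * connection down, the cached pointer is cleared so a repeated notify
     * cannot reuse (and double-free) the same db connection.
     * Assumes the usual xlator headers; field/event names are assumed. */
    int32_t
    notify(xlator_t *this, int32_t event, void *data, ...)
    {
        gf_ctr_private_t *priv = this->private; /* field names assumed */

        if (event == GF_EVENT_CLEANUP && priv && priv->_db_conn) {
            fini_db(priv->_db_conn);
            priv->_db_conn = NULL; /* prevent reuse of the freed connection */
        }

        return default_notify(this, event, data);
    }
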
mscherer pushed a commit that referenced this pull request Jul 31, 2017
…ction

Problem: Sometimes the brick process crashes in the notify function while
         cleaning up the db connection when brick mux is enabled.

Solution: In the changetimerecorder (ctr) notify function, set db_conn to
          NULL after cleaning up the db connection so that the same
          connection is not reused.

Note: Below is the backtrace pattern shown by the brick process
      #0  0x00007ff98a30c1f7 in raise () from /lib64/libc.so.6
      #1  0x00007ff98a30d8e8 in abort () from /lib64/libc.so.6
      #2  0x00007ff98a34bf47 in __libc_message () from /lib64/libc.so.6
      #3  0x00007ff98a351b54 in malloc_printerr () from /lib64/libc.so.6
      #4  0x00007ff98a3537aa in _int_free () from /lib64/libc.so.6
      #5  0x00007ff97d95e311 in gf_sql_connection_fini (sql_connection=sql_connection@entry=0x7ff8e8496b50) at gfdb_sqlite3.c:42
      #6  0x00007ff97d95e38a in gf_sqlite3_fini (db_conn=0x7ff92ca04470) at gfdb_sqlite3.c:507
      #7  0x00007ff97d957156 in fini_db (_conn_node=0x7ff92ca04470) at gfdb_data_store.c:326
      #8  0x00007ff97db78679 in notify (this=0x7ff92c5b3670, event=9, data=0x7ff92c5b5a00) at changetimerecorder.c:2178
      #9  0x00007ff98bca0dc2 in xlator_notify (xl=0x7ff92c5b3670, event=event@entry=9, data=data@entry=0x7ff92c5b5a00) at xlator.c:549
      #10 0x00007ff98bd3ac12 in default_notify (this=this@entry=0x7ff92c5b5a00, event=9, data=data@entry=0x7ff92c5b6d50) at defaults.c:3139

> BUG: 1475632
> Change-Id: Idd4bfdb4629c4799ac477ade81228065212683fb
> Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
> Reviewed-on: https://review.gluster.org/17888
> Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
> (cherry picked from commit fc0fce2)

BUG: 1476109
Change-Id: I96b7ab765b596cec5b779d7186ec549615e3b68b
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://review.gluster.org/17902
Smoke: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
mscherer pushed a commit that referenced this pull request Aug 8, 2017
Refcounting was added for the nfs call state in https://review.gluster.org/17696.
This is based on the assumption that the call state is never NULL when it is freed.
But currently the gluster nfs server crashes in different scenarios at
nfs3_getattr() with the following bt

#0  0x00007ff1cfea9205 in _gf_ref_put (ref=ref@entry=0x0) at refcount.c:36
#1  0x00007ff1c1997455 in nfs3_call_state_wipe (cs=cs@entry=0x0) at nfs3.c:559
#2  0x00007ff1c1998931 in nfs3_getattr (req=req@entry=0x7ff1bc0b26d0, fh=fh@entry=0x7ff1c2f76ae0) at nfs3.c:962
#3  0x00007ff1c1998c8a in nfs3svc_getattr (req=0x7ff1bc0b26d0) at nfs3.c:987
#4  0x00007ff1cfbfd8c5 in rpcsvc_handle_rpc_call (svc=0x7ff1bc03e500, trans=trans@entry=0x7ff1bc0c8020, msg=<optimized out>) at rpcsvc.c:695
#5  0x00007ff1cfbfdaab in rpcsvc_notify (trans=0x7ff1bc0c8020, mydata=<optimized out>, event=<optimized out>, data=<optimized out>) at rpcsvc.c:789
#6  0x00007ff1cfbff9e3 in rpc_transport_notify (this=this@entry=0x7ff1bc0c8020, event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7ff1bc0038d0)
    at rpc-transport.c:538
#7  0x00007ff1c4a2e3d6 in socket_event_poll_in (this=this@entry=0x7ff1bc0c8020, notify_handled=<optimized out>) at socket.c:2306
#8  0x00007ff1c4a3097c in socket_event_handler (fd=21, idx=9, gen=19, data=0x7ff1bc0c8020, poll_in=1, poll_out=0, poll_err=0) at socket.c:2458
#9  0x00007ff1cfe950f6 in event_dispatch_epoll_handler (event=0x7ff1c2f76e80, event_pool=0x5618154d5ee0) at event-epoll.c:572
#10 event_dispatch_epoll_worker (data=0x56181551cbd0) at event-epoll.c:648
#11 0x00007ff1cec99e25 in start_thread () from /lib64/libpthread.so.0
#12 0x00007ff1ce56634d in clone () from /lib64/libc.so.6

This patch moves the existing NULL check from __nfs3_call_state_wipe() to nfs3_call_state_wipe().

Change-Id: I2d73632f4be23f14d8467be3d908b09b3a2d87ea
BUG: 1479030
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: https://review.gluster.org/17989
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
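
In other words, the NULL guard is hoisted from the locked helper into the public entry
point so that error paths which never allocated a call state can pass NULL safely. A
minimal C sketch of that shape (the teardown body is elided; only the moved guard is
shown):

    /* Minimal sketch of the described change, not the literal diff: the NULL
     * guard now lives in the public wipe function, so a NULL cs from an early
     * error path (as in the nfs3_getattr crash above) is tolerated. */
    void
    nfs3_call_state_wipe(nfs3_call_state_t *cs)
    {
        if (!cs) /* guard moved here from __nfs3_call_state_wipe() */
            return;

        /* ... existing ref-counted teardown continues unchanged ... */
    }
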
mscherer pushed a commit that referenced this pull request Aug 8, 2017
Refcounting was added for the nfs call state in https://review.gluster.org/17696.
This is based on the assumption that the call state is never NULL when it is freed.
But currently the gluster nfs server crashes in different scenarios at
nfs3_getattr() with the following bt

#0  0x00007ff1cfea9205 in _gf_ref_put (ref=ref@entry=0x0) at refcount.c:36
#1  0x00007ff1c1997455 in nfs3_call_state_wipe (cs=cs@entry=0x0) at nfs3.c:559
#2  0x00007ff1c1998931 in nfs3_getattr (req=req@entry=0x7ff1bc0b26d0, fh=fh@entry=0x7ff1c2f76ae0) at nfs3.c:962
#3  0x00007ff1c1998c8a in nfs3svc_getattr (req=0x7ff1bc0b26d0) at nfs3.c:987
#4  0x00007ff1cfbfd8c5 in rpcsvc_handle_rpc_call (svc=0x7ff1bc03e500, trans=trans@entry=0x7ff1bc0c8020, msg=<optimized out>) at rpcsvc.c:695
#5  0x00007ff1cfbfdaab in rpcsvc_notify (trans=0x7ff1bc0c8020, mydata=<optimized out>, event=<optimized out>, data=<optimized out>) at rpcsvc.c:789
#6  0x00007ff1cfbff9e3 in rpc_transport_notify (this=this@entry=0x7ff1bc0c8020, event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7ff1bc0038d0)
    at rpc-transport.c:538
#7  0x00007ff1c4a2e3d6 in socket_event_poll_in (this=this@entry=0x7ff1bc0c8020, notify_handled=<optimized out>) at socket.c:2306
#8  0x00007ff1c4a3097c in socket_event_handler (fd=21, idx=9, gen=19, data=0x7ff1bc0c8020, poll_in=1, poll_out=0, poll_err=0) at socket.c:2458
#9  0x00007ff1cfe950f6 in event_dispatch_epoll_handler (event=0x7ff1c2f76e80, event_pool=0x5618154d5ee0) at event-epoll.c:572
#10 event_dispatch_epoll_worker (data=0x56181551cbd0) at event-epoll.c:648
#11 0x00007ff1cec99e25 in start_thread () from /lib64/libpthread.so.0
#12 0x00007ff1ce56634d in clone () from /lib64/libc.so.6

This patch moves the existing NULL check from __nfs3_call_state_wipe() to
nfs3_call_state_wipe().

Cherry picked from commit 111d6bd:
> Change-Id: I2d73632f4be23f14d8467be3d908b09b3a2d87ea
> BUG: 1479030
> Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
> Reviewed-on: https://review.gluster.org/17989
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Niels de Vos <ndevos@redhat.com>

Change-Id: I2d73632f4be23f14d8467be3d908b09b3a2d87ea
BUG: 1479263
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/17994
Reviewed-by: jiffin tony Thottan <jthottan@redhat.com>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
mscherer pushed a commit that referenced this pull request Aug 11, 2017
Refcounting was added for the nfs call state in https://review.gluster.org/17696.
This is based on the assumption that the call state is never NULL when it is freed.
But currently the gluster nfs server crashes in different scenarios at
nfs3_getattr() with the following bt

#0  0x00007ff1cfea9205 in _gf_ref_put (ref=ref@entry=0x0) at refcount.c:36
#1  0x00007ff1c1997455 in nfs3_call_state_wipe (cs=cs@entry=0x0) at nfs3.c:559
#2  0x00007ff1c1998931 in nfs3_getattr (req=req@entry=0x7ff1bc0b26d0, fh=fh@entry=0x7ff1c2f76ae0) at nfs3.c:962
#3  0x00007ff1c1998c8a in nfs3svc_getattr (req=0x7ff1bc0b26d0) at nfs3.c:987
#4  0x00007ff1cfbfd8c5 in rpcsvc_handle_rpc_call (svc=0x7ff1bc03e500, trans=trans@entry=0x7ff1bc0c8020, msg=<optimized out>) at rpcsvc.c:695
#5  0x00007ff1cfbfdaab in rpcsvc_notify (trans=0x7ff1bc0c8020, mydata=<optimized out>, event=<optimized out>, data=<optimized out>) at rpcsvc.c:789
#6  0x00007ff1cfbff9e3 in rpc_transport_notify (this=this@entry=0x7ff1bc0c8020, event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7ff1bc0038d0)
    at rpc-transport.c:538
#7  0x00007ff1c4a2e3d6 in socket_event_poll_in (this=this@entry=0x7ff1bc0c8020, notify_handled=<optimized out>) at socket.c:2306
#8  0x00007ff1c4a3097c in socket_event_handler (fd=21, idx=9, gen=19, data=0x7ff1bc0c8020, poll_in=1, poll_out=0, poll_err=0) at socket.c:2458
#9  0x00007ff1cfe950f6 in event_dispatch_epoll_handler (event=0x7ff1c2f76e80, event_pool=0x5618154d5ee0) at event-epoll.c:572
#10 event_dispatch_epoll_worker (data=0x56181551cbd0) at event-epoll.c:648
#11 0x00007ff1cec99e25 in start_thread () from /lib64/libpthread.so.0
#12 0x00007ff1ce56634d in clone () from /lib64/libc.so.6

This patch moves the existing NULL check from __nfs3_call_state_wipe() to
nfs3_call_state_wipe().

Cherry picked from commit 111d6bd:
> Change-Id: I2d73632f4be23f14d8467be3d908b09b3a2d87ea
> BUG: 1479030
> Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
> Reviewed-on: https://review.gluster.org/17989
> Smoke: Gluster Build System <jenkins@build.gluster.org>
> CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
> Reviewed-by: Niels de Vos <ndevos@redhat.com>

Change-Id: I2d73632f4be23f14d8467be3d908b09b3a2d87ea
BUG: 1480594
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-on: https://review.gluster.org/18027
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
amarts referenced this pull request in amarts/glusterfs_fork Oct 31, 2017
…ction

Problem: Sometimes the brick process crashes in the notify function while
         cleaning up the db connection when brick mux is enabled.

Solution: In the changetimerecorder (ctr) notify function, set db_conn to
          NULL after cleaning up the db connection so that the same
          connection is not reused.

Note: Below is the backtrace pattern shown by the brick process
      #0  0x00007ff98a30c1f7 in raise () from /lib64/libc.so.6
      #1  0x00007ff98a30d8e8 in abort () from /lib64/libc.so.6
      #2  0x00007ff98a34bf47 in __libc_message () from /lib64/libc.so.6
      #3  0x00007ff98a351b54 in malloc_printerr () from /lib64/libc.so.6
      #4  0x00007ff98a3537aa in _int_free () from /lib64/libc.so.6
      #5  0x00007ff97d95e311 in gf_sql_connection_fini (sql_connection=sql_connection@entry=0x7ff8e8496b50) at gfdb_sqlite3.c:42
      #6  0x00007ff97d95e38a in gf_sqlite3_fini (db_conn=0x7ff92ca04470) at gfdb_sqlite3.c:507
      #7  0x00007ff97d957156 in fini_db (_conn_node=0x7ff92ca04470) at gfdb_data_store.c:326
      #8  0x00007ff97db78679 in notify (this=0x7ff92c5b3670, event=9, data=0x7ff92c5b5a00) at changetimerecorder.c:2178
      #9  0x00007ff98bca0dc2 in xlator_notify (xl=0x7ff92c5b3670, event=event@entry=9, data=data@entry=0x7ff92c5b5a00) at xlator.c:549
      #10 0x00007ff98bd3ac12 in default_notify (this=this@entry=0x7ff92c5b5a00, event=9, data=data@entry=0x7ff92c5b6d50) at defaults.c:3139

BUG: 1475632
Change-Id: Idd4bfdb4629c4799ac477ade81228065212683fb
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
Reviewed-on: https://review.gluster.org/17888
Reviewed-by: Jeff Darcy <jeff@pl.atyp.us>
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
amarts referenced this pull request in amarts/glusterfs_fork Oct 31, 2017
Refcounting was added for the nfs call state in https://review.gluster.org/17696.
This is based on the assumption that the call state is never NULL when it is freed.
But currently the gluster nfs server crashes in different scenarios at
nfs3_getattr() with the following bt

#0  0x00007ff1cfea9205 in _gf_ref_put (ref=ref@entry=0x0) at refcount.c:36
#1  0x00007ff1c1997455 in nfs3_call_state_wipe (cs=cs@entry=0x0) at nfs3.c:559
#2  0x00007ff1c1998931 in nfs3_getattr (req=req@entry=0x7ff1bc0b26d0, fh=fh@entry=0x7ff1c2f76ae0) at nfs3.c:962
#3  0x00007ff1c1998c8a in nfs3svc_getattr (req=0x7ff1bc0b26d0) at nfs3.c:987
#4  0x00007ff1cfbfd8c5 in rpcsvc_handle_rpc_call (svc=0x7ff1bc03e500, trans=trans@entry=0x7ff1bc0c8020, msg=<optimized out>) at rpcsvc.c:695
#5  0x00007ff1cfbfdaab in rpcsvc_notify (trans=0x7ff1bc0c8020, mydata=<optimized out>, event=<optimized out>, data=<optimized out>) at rpcsvc.c:789
#6  0x00007ff1cfbff9e3 in rpc_transport_notify (this=this@entry=0x7ff1bc0c8020, event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7ff1bc0038d0)
    at rpc-transport.c:538
#7  0x00007ff1c4a2e3d6 in socket_event_poll_in (this=this@entry=0x7ff1bc0c8020, notify_handled=<optimized out>) at socket.c:2306
#8  0x00007ff1c4a3097c in socket_event_handler (fd=21, idx=9, gen=19, data=0x7ff1bc0c8020, poll_in=1, poll_out=0, poll_err=0) at socket.c:2458
#9  0x00007ff1cfe950f6 in event_dispatch_epoll_handler (event=0x7ff1c2f76e80, event_pool=0x5618154d5ee0) at event-epoll.c:572
#10 event_dispatch_epoll_worker (data=0x56181551cbd0) at event-epoll.c:648
#11 0x00007ff1cec99e25 in start_thread () from /lib64/libpthread.so.0
#12 0x00007ff1ce56634d in clone () from /lib64/libc.so.6

This patch moves the existing NULL check from __nfs3_call_state_wipe() to nfs3_call_state_wipe().

Change-Id: I2d73632f4be23f14d8467be3d908b09b3a2d87ea
BUG: 1479030
Signed-off-by: Jiffin Tony Thottan <jthottan@redhat.com>
Reviewed-on: https://review.gluster.org/17989
Smoke: Gluster Build System <jenkins@build.gluster.org>
CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
gluster-ant pushed a commit that referenced this pull request Oct 16, 2018
Fixes the below ASan trace:

Direct leak of 130 byte(s) in 1 object(s) allocated from:
    #0 0x7fa794bb5850 in malloc (/lib64/libasan.so.4+0xde850)
    #1 0x7fa7944e5de9 in __gf_malloc ../../../libglusterfs/src/mem-pool.c:136
    #2 0x40b85c in gf_strndup ../../../libglusterfs/src/mem-pool.h:166
    #3 0x40b85c in gf_strdup ../../../libglusterfs/src/mem-pool.h:183
    #4 0x40b85c in parse_opts ../../../glusterfsd/src/glusterfsd.c:1049
    #5 0x7fa792a98720 in argp_parse (/lib64/libc.so.6+0x101720)
    #6 0x40d89f in parse_cmdline ../../../glusterfsd/src/glusterfsd.c:2041
    #7 0x406d07 in main ../../../glusterfsd/src/glusterfsd.c:2625


updates: bz#1633930
Change-Id: I394b3fc24b7a994c1b03635cb5e973e7290491d3
Signed-off-by: Amar Tumballi <amarts@redhat.com>
gluster-ant pushed a commit that referenced this pull request Nov 28, 2018
Direct leak of 609960 byte(s) in 4485 object(s) allocated from:
    #0 0x7f0d719bea50 in __interceptor_calloc (/lib64/libasan.so.5+0xefa50)
    #1 0x7f0d716dc08f in __gf_calloc ../../../libglusterfs/src/mem-pool.c:111
    #2 0x7f0d5d41d9b2 in __posix_get_mdata_xattr ../../../../../xlators/storage/posix/src/posix-metadata.c:240
    #3 0x7f0d5d41dd6b in posix_get_mdata_xattr ../../../../../xlators/storage/posix/src/posix-metadata.c:317
    #4 0x7f0d5d39e855 in posix_fdstat ../../../../../xlators/storage/posix/src/posix-helpers.c:685
    #5 0x7f0d5d3d65ec in posix_create ../../../../../xlators/storage/posix/src/posix-entry-ops.c:2173

Direct leak of 609960 byte(s) in 4485 object(s) allocated from:
    #0 0x7f0d719bea50 in __interceptor_calloc (/lib64/libasan.so.5+0xefa50)
    #1 0x7f0d716dc08f in __gf_calloc ../../../libglusterfs/src/mem-pool.c:111
    #2 0x7f0d5d41ced2 in posix_set_mdata_xattr ../../../../../xlators/storage/posix/src/posix-metadata.c:359
    #3 0x7f0d5d41e70f in posix_set_ctime ../../../../../xlators/storage/posix/src/posix-metadata.c:616
    #4 0x7f0d5d3d662c in posix_create ../../../../../xlators/storage/posix/src/posix-entry-ops.c:2181

We were freeing only the first context in the inode during forget, and not the second.

updates: bz#1633930
Change-Id: Ib61b4453aa3d2039d6ce660f52ef45390539b9db
Signed-off-by: Amar Tumballi <amarts@redhat.com>
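
The fix is therefore in the forget path: both values parked in the inode context must
be released, not just the first. A minimal C sketch of that shape, assuming the
libglusterfs two-slot inode-context helper behaves as expected and using illustrative
casts (this is not the actual patch):

    /* Minimal sketch, not the actual patch: posix parks two values in the
     * inode context, so forget() has to free both, not only the first.
     * Assumes the usual libglusterfs headers (xlator.h, inode.h, mem-pool.h). */
    static int
    forget_sketch(xlator_t *this, inode_t *inode)
    {
        uint64_t ctx1 = 0, ctx2 = 0;

        inode_ctx_get2(inode, this, &ctx1, &ctx2);

        if (ctx1)
            GF_FREE((void *)(uintptr_t)ctx1); /* first slot: already handled */
        if (ctx2)
            GF_FREE((void *)(uintptr_t)ctx2); /* second slot: the leaked mdata */

        return 0;
    }
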
gluster-ant pushed a commit that referenced this pull request Dec 6, 2018
Tracebacks:

Direct leak of 96 byte(s) in 1 object(s) allocated from:
 #0 0x7f3acf9eac48 in malloc (/lib64/libasan.so.5+0xeec48)
 #1 0x7f3acf510949 in __gf_malloc ./libglusterfs/src/mem-pool.c:136
 #2 0x7f3acf5111bb in gf_vasprintf ./libglusterfs/src/mem-pool.c:236
 #3 0x7f3acf51138a in gf_asprintf ./libglusterfs/src/mem-pool.c:256
 #4 0x421611 in cli_cmd_volume_set_cbk ./cli/src/cli-cmd-volume.c:868
 #5 0x410599 in cli_cmd_process ./cli/src/cli-cmd.c:135
 #6 0x40f90d in cli_batch ./cli/src/input.c:29
 #7 0x7f3acd78c593 in start_thread pthread_create.c:463

Direct leak of 73 byte(s) in 1 object(s) allocated from:
 #0 0x7f3acf9eac48 in malloc (/lib64/libasan.so.5+0xeec48)
 #1 0x7f3acf510949 in __gf_malloc ./libglusterfs/src/mem-pool.c:136
 #2 0x421519 in gf_strndup ../../libglusterfs/src/mem-pool.h:167
 #3 0x421519 in gf_strdup ../../libglusterfs/src/mem-pool.h:184
 #4 0x421519 in cli_cmd_volume_set_cbk cli/src/cli-cmd-volume.c:859
 #5 0x410599 in cli_cmd_process cli/src/cli-cmd.c:135
 #6 0x40f90d in cli_batch cli/src/input.c:29
 #7 0x7f3acd78c593 in start_thread pthread_create.c:463

Change-Id: I3312751c1e3178672360a678fe15b1f7f1054b22
updates: bz#1633930
Signed-off-by: Kotresh HR <khiremat@redhat.com>
gluster-ant pushed a commit that referenced this pull request Dec 10, 2018
This patch fixes a memory leak reported by ASan.

Tracebacks:
Direct leak of 84 byte(s) in 1 object(s) allocated from:
    #0 0x7f71ea107848 in __interceptor_malloc (/lib64/libasan.so.5+0xef848)
    #1 0x7f71e9e2ac49 in __gf_malloc ./libglusterfs/src/mem-pool.c:136
    #2 0x7f71e9e2b4bb in gf_vasprintf ./libglusterfs/src/mem-pool.c:236
    #3 0x7f71e9e2b68a in gf_asprintf ./libglusterfs/src/mem-pool.c:256
    #4 0x41e8ec in cli_cmd_bitrot_cbk ./cli/src/cli-cmd-volume.c:1847
    #5 0x410b39 in cli_cmd_process ./cli/src/cli-cmd.c:137
    #6 0x40fe9d in cli_batch ./cli/src/input.c:29
    #7 0x7f71e989558d in start_thread (/lib64/libpthread.so.0+0x858d)

updates: bz#1633930
Change-Id: I8977e45add742e67047291f398f0ee79eb09afe4
Signed-off-by: Sunny Kumar <sunkumar@redhat.com>
gluster-ant pushed a commit that referenced this pull request Dec 17, 2018
Traceback:

Direct leak of 765 byte(s) in 9 object(s) allocated from:
    #0 0x7ffb9cad2c48 in malloc (/lib64/libasan.so.5+0xeec48)
    #1 0x7ffb9c5f8949 in __gf_malloc ./libglusterfs/src/mem-pool.c:136
    #2 0x7ffb9c5f91bb in gf_vasprintf ./libglusterfs/src/mem-pool.c:236
    #3 0x7ffb9c5f938a in gf_asprintf ./libglusterfs/src/mem-pool.c:256
    #4 0x7ffb826714ab in afr_get_heal_info ./xlators/cluster/afr/src/afr-common.c:6204
    #5 0x7ffb825765e5 in afr_handle_heal_xattrs ./xlators/cluster/afr/src/afr-inode-read.c:1481
    #6 0x7ffb825765e5 in afr_getxattr ./xlators/cluster/afr/src/afr-inode-read.c:1571
    #7 0x7ffb9c635af7 in syncop_getxattr ./libglusterfs/src/syncop.c:1680
    #8 0x406c78 in glfsh_process_entries ./heal/src/glfs-heal.c:810
    #9 0x408555 in glfsh_crawl_directory ./heal/src/glfs-heal.c:898
    #10 0x408cc0 in glfsh_print_pending_heals_type ./heal/src/glfs-heal.c:970
    #11 0x408fc5 in glfsh_print_pending_heals ./heal/src/glfs-heal.c:1012
    #12 0x409546 in glfsh_gather_heal_info ./heal/src/glfs-heal.c:1154
    #13 0x403e96 in main ./heal/src/glfs-heal.c:1745
    #14 0x7ffb99bc411a in __libc_start_main ../csu/libc-start.c:308

The dictionary is referenced by the caller to print the status.
So set the value as a dynstr; the last unref of the dictionary will free it.

updates: bz#1633930
Change-Id: Ib5a7cb891e6f7d90560859aaf6239e52ff5477d0
Signed-off-by: Kotresh HR <khiremat@redhat.com>
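
Setting the value as a dynstr ties the string's lifetime to the dict itself. A small C
sketch of that pattern (the key name and wrapper function are illustrative, not taken
from the patch):

    /* Small sketch of the "dynstr" pattern described above: the dict takes
     * ownership of the gf_asprintf()'d buffer and the last dict_unref()
     * frees it, so the caller that prints the status does not leak it. */
    static int
    set_heal_status(dict_t *dict, int pending)
    {
        char *status = NULL;

        if (gf_asprintf(&status, "Number of entries: %d", pending) < 0)
            return -1;

        if (dict_set_dynstr(dict, "heal-info-status", status) != 0) {
            GF_FREE(status); /* ownership not taken; free here instead */
            return -1;
        }

        return 0;
    }
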
amarts pushed a commit to gluster/build-jobs that referenced this pull request Jul 3, 2019
This job will comment on issues from the commit message

Change-Id: I689a2ccf6bb7fefeffa078aed36cc051c66504b6
Updates: #20
Fixes: #8
Fixes: gluster/glusterfs#5
xhernandez pushed a commit that referenced this pull request Mar 4, 2022
Unconditionally free serialized dict data
in '__glusterd_send_svc_configure_req()'.

Found with AddressSanitizer:

==273334==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 89 byte(s) in 1 object(s) allocated from:
    #0 0x7fc2ce2a293f in __interceptor_malloc (/lib64/libasan.so.6+0xae93f)
    #1 0x7fc2cdff9c6c in __gf_malloc libglusterfs/src/mem-pool.c:201
    #2 0x7fc2cdff9c6c in __gf_malloc libglusterfs/src/mem-pool.c:188
    #3 0x7fc2cdf86bde in dict_allocate_and_serialize libglusterfs/src/dict.c:3285
    #4 0x7fc2b8398843 in __glusterd_send_svc_configure_req xlators/mgmt/glusterd/src/glusterd-svc-helper.c:830
    #5 0x7fc2b8399238 in glusterd_attach_svc xlators/mgmt/glusterd/src/glusterd-svc-helper.c:932
    #6 0x7fc2b83a60f1 in glusterd_shdsvc_start xlators/mgmt/glusterd/src/glusterd-shd-svc.c:509
    #7 0x7fc2b83a5124 in glusterd_shdsvc_manager xlators/mgmt/glusterd/src/glusterd-shd-svc.c:335
    #8 0x7fc2b8395364 in glusterd_svcs_manager xlators/mgmt/glusterd/src/glusterd-svc-helper.c:143
    #9 0x7fc2b82e3a6c in glusterd_op_start_volume xlators/mgmt/glusterd/src/glusterd-volume-ops.c:2412
    #10 0x7fc2b835ec5a in gd_mgmt_v3_commit_fn xlators/mgmt/glusterd/src/glusterd-mgmt.c:329
    #11 0x7fc2b8365497 in glusterd_mgmt_v3_commit xlators/mgmt/glusterd/src/glusterd-mgmt.c:1639
    #12 0x7fc2b836ad30 in glusterd_mgmt_v3_initiate_all_phases xlators/mgmt/glusterd/src/glusterd-mgmt.c:2651
    #13 0x7fc2b82d504b in __glusterd_handle_cli_start_volume xlators/mgmt/glusterd/src/glusterd-volume-ops.c:364
    #14 0x7fc2b817465c in glusterd_big_locked_handler xlators/mgmt/glusterd/src/glusterd-handler.c:79
    #15 0x7fc2ce020ff9 in synctask_wrap libglusterfs/src/syncop.c:385
    #16 0x7fc2cd69184f  (/lib64/libc.so.6+0x5784f)

Signed-off-by: Dmitry Antipov <dantipov@cloudlinux.com>
Fixes: #1000
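
Leaks of this shape come from the buffer that dict_allocate_and_serialize() hands back
never being released on every path. A rough C sketch of the "unconditionally free"
idea (the variable names and the submission step are assumptions, not the glusterd
code):

    /* Rough sketch: whichever way the function exits, the serialized buffer
     * returned by dict_allocate_and_serialize() is released at 'out'. */
    static int
    send_serialized_dict(dict_t *dict)
    {
        char *buf = NULL;
        unsigned int len = 0;
        int ret = dict_allocate_and_serialize(dict, &buf, &len);

        if (ret)
            goto out;

        ret = submit_request(buf, len); /* hypothetical submission step */

    out:
        GF_FREE(buf); /* safe on NULL; freed on success and error paths alike */
        return ret;
    }
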
mohit84 added a commit to mohit84/glusterfs that referenced this pull request Oct 20, 2023
The client is throwing the below stacktrace while asan is enabled.
The client is facing an issue while the application is trying
to call removexattr on a 2x1 subvol and the non-mds subvol is down.
As we can see in the below stacktrace, dht_setxattr_mds_cbk calls
dht_setxattr_non_mds_cbk, and dht_setxattr_non_mds_cbk tries to
wipe local because call_cnt is 0, but dht_setxattr_mds_cbk then tries
to access frame->local; that is why it crashed.

0x621000051c34 is located 1844 bytes inside of 4164-byte region [0x621000051500,0x621000052544)
freed by thread T7 here:
    #0 0x7f916ccb9388 in __interceptor_free.part.0 (/lib64/libasan.so.8+0xb9388)
    #1 0x7f91654af204 in dht_local_wipe /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-helper.c:713
    #2 0x7f91654af204 in dht_setxattr_non_mds_cbk /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-common.c:3900
    #3 0x7f91694c1f42 in client4_0_removexattr_cbk /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
    #4 0x7f91694ba26f in client_submit_request /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client.c:288
    #5 0x7f91695021bd in client4_0_removexattr /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:4480
    #6 0x7f91694a5f56 in client_removexattr /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client.c:1439
    #7 0x7f91654a1161 in dht_setxattr_mds_cbk /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-common.c:3979
    #8 0x7f91694c1f42 in client4_0_removexattr_cbk /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
    #9 0x7f916cbc4340 in rpc_clnt_handle_reply /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
    #10 0x7f916cbc4340 in rpc_clnt_notify /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
    #11 0x7f916cbb7ec5 in rpc_transport_notify /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-transport.c:504
    #12 0x7f916a1aa5fa in socket_event_poll_in_async /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
    #13 0x7f916a1bd7c2 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
    #14 0x7f916a1bd7c2 in socket_event_poll_in /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
    #15 0x7f916a1bd7c2 in socket_event_handler /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
    #16 0x7f916a1bd7c2 in socket_event_handler /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
    #17 0x7f916c946d22 in event_dispatch_epoll_handler /root/glusterfs_new/glusterfs/libglusterfs/src/event-epoll.c:614
    #18 0x7f916c946d22 in event_dispatch_epoll_worker /root/glusterfs_new/glusterfs/libglusterfs/src/event-epoll.c:725
    #19 0x7f916be8cdec in start_thread (/lib64/libc.so.6+0x8cdec)

Solution: Use a switch instead of an if statement to wind the operation; with
          a switch the code will not try to access local after winding the
          operation for the last dht subvol.

Fixes: gluster#3732
Change-Id: I031bc814d6df98058430ef4de7040e3370d1c677
Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
mohit84 added a commit that referenced this pull request Oct 23, 2023
The client is throwing the below stacktrace while asan is enabled.
The client is facing an issue while the application is trying
to call removexattr on a 2x1 subvol and the non-mds subvol is down.
As we can see in the below stacktrace, dht_setxattr_mds_cbk calls
dht_setxattr_non_mds_cbk, and dht_setxattr_non_mds_cbk tries to
wipe local because call_cnt is 0, but dht_setxattr_mds_cbk then tries
to access frame->local; that is why it crashed.

0x621000051c34 is located 1844 bytes inside of 4164-byte region [0x621000051500,0x621000052544)
freed by thread T7 here:
    #0 0x7f916ccb9388 in __interceptor_free.part.0 (/lib64/libasan.so.8+0xb9388)
    #1 0x7f91654af204 in dht_local_wipe /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-helper.c:713
    #2 0x7f91654af204 in dht_setxattr_non_mds_cbk /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-common.c:3900
    #3 0x7f91694c1f42 in client4_0_removexattr_cbk /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
    #4 0x7f91694ba26f in client_submit_request /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client.c:288
    #5 0x7f91695021bd in client4_0_removexattr /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:4480
    #6 0x7f91694a5f56 in client_removexattr /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client.c:1439
    #7 0x7f91654a1161 in dht_setxattr_mds_cbk /root/glusterfs_new/glusterfs/xlators/cluster/dht/src/dht-common.c:3979
    #8 0x7f91694c1f42 in client4_0_removexattr_cbk /root/glusterfs_new/glusterfs/xlators/protocol/client/src/client-rpc-fops_v2.c:1061
    #9 0x7f916cbc4340 in rpc_clnt_handle_reply /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:723
    #10 0x7f916cbc4340 in rpc_clnt_notify /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-clnt.c:890
    #11 0x7f916cbb7ec5 in rpc_transport_notify /root/glusterfs_new/glusterfs/rpc/rpc-lib/src/rpc-transport.c:504
    #12 0x7f916a1aa5fa in socket_event_poll_in_async /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2358
    #13 0x7f916a1bd7c2 in gf_async ../../../../libglusterfs/src/glusterfs/async.h:187
    #14 0x7f916a1bd7c2 in socket_event_poll_in /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2399
    #15 0x7f916a1bd7c2 in socket_event_handler /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2790
    #16 0x7f916a1bd7c2 in socket_event_handler /root/glusterfs_new/glusterfs/rpc/rpc-transport/socket/src/socket.c:2710
    #17 0x7f916c946d22 in event_dispatch_epoll_handler /root/glusterfs_new/glusterfs/libglusterfs/src/event-epoll.c:614
    #18 0x7f916c946d22 in event_dispatch_epoll_worker /root/glusterfs_new/glusterfs/libglusterfs/src/event-epoll.c:725
    #19 0x7f916be8cdec in start_thread (/lib64/libc.so.6+0x8cdec)

Solution: Use a switch instead of an if statement to wind the operation; with
          a switch the code will not try to access local after winding the
          operation for the last dht subvol.

Fixes: #3732
Change-Id: I031bc814d6df98058430ef4de7040e3370d1c677

Signed-off-by: Mohit Agrawal <moagrawa@redhat.com>
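
The hazard being avoided is touching frame->local after winding the last pending
operation, because the callback may already have run and freed it. The switch form
winds exactly one branch and returns without looking at local again, as in this
simplified C sketch (field names are illustrative; this is not the dht code itself):

    /* Simplified sketch of the if-vs-switch hazard described above: after
     * STACK_WIND() of the last pending fop, the callback may already have
     * freed frame->local, so nothing may dereference local afterwards. */
    static int
    wind_to_non_mds_sketch(call_frame_t *frame, xlator_t *subvol)
    {
        dht_local_t *local = frame->local;

        switch (local->fop) {
        case GF_FOP_SETXATTR:
            STACK_WIND(frame, dht_setxattr_non_mds_cbk, subvol,
                       subvol->fops->setxattr, &local->loc, local->xattr,
                       local->flags, NULL);
            break; /* nothing touches local after the wind */
        case GF_FOP_REMOVEXATTR:
            STACK_WIND(frame, dht_setxattr_non_mds_cbk, subvol,
                       subvol->fops->removexattr, &local->loc, local->key,
                       NULL);
            break;
        default:
            break;
        }

        return 0;
    }
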
chen1195585098 pushed a commit to chen1195585098/glusterfs that referenced this pull request Jun 25, 2024
In glusterd-store.c, there are while loops like:

ret = gf_store_iter_get_next(iter, &key, &value, &op_errno);
while (!ret) {
    if (xx_condition) {
        do_something();
        goto out;
    }
    GF_FREE(key);
    GF_FREE(value);
    key = NULL;
    value = NULL;

    ret = gf_store_iter_get_next(iter, &key, &value, &op_errno);

}
It's ok in the normal case; however, once the condition is not met
and the procedure goes to 'out', there will be a memory leak.

Hence, it is necessary to put the cleanup at 'out'.

Similar leaks are triggered in glusterd_store_retrieve_peers.
If no peerinfo is found, the procedure goes to the next loop iteration.
It means the memory allocated for key & value will be leaked once
gf_store_iter_get_next is called again in the next iteration.

This patch fixes the above-mentioned memory leaks, which were detected by
asan.

Direct leak of 11430 byte(s) in 150 object(s) allocated from:
    #0 0x7f59844efbb8 in __interceptor_malloc (/lib64/libasan.so.5+0xefbb8)
    #1 0x7f5983aeb96d in __gf_malloc (/lib64/libglusterfs.so.0+0xeb96d)
    #2 0x7f59775b569b in glusterd_store_update_volinfo ../../../../libglusterfs/src/glusterfs/mem-pool.h:170
    #3 0x7f59775be3b5 in glusterd_store_retrieve_volume glusterd-store.c:3334
    #4 0x7f59775bf076 in glusterd_store_retrieve_volumes glusterd-store.c:3571
    #5 0x7f59775bfc9e in glusterd_restore glusterd-store.c:4913
    #6  0x7f59774ca5a1  (/usr/lib64/glusterfs/8.2/xlator/mgmt/glusterd.so+0xca5a1)
    #7 0x7f5983a7cb6b in __xlator_init xlator.c:594
    #8 0x7f5983b0c5d0 in glusterfs_graph_init graph.c:422
    #9 0x7f5983b0d422 in glusterfs_graph_activate (/lib64/libglusterfs.so.0+0x10d422)
    #10 0x5605f2e1eff5 in glusterfs_process_volfp glusterfsd.c:2506
    #11 0x5605f2e1f238 in glusterfs_volumes_init glusterfsd.c:2577
    #12 0x5605f2e15d8d in main (/usr/sbin/glusterfsd+0x15d8d)
    #13 0x7f598103acf2 in __libc_start_main (/lib64/libc.so.6+0x3acf2)
    #14 0x5605f2e162cd in _start (/usr/sbin/glusterfsd+0x162cd)

Direct leak of 3351 byte(s) in 30 object(s) allocated from:
    #0 0x7f59844efbb8 in __interceptor_malloc (/lib64/libasan.so.5+0xefbb8)
    #1 0x7f5983aeb96d in __gf_malloc (/lib64/libglusterfs.so.0+0xeb96d)
    #2 0x7f59775541e7 in glusterd_is_path_in_use ../../../../libglusterfs/src/glusterfs/mem-pool.h:170
    #3 0x7f59775541e7 in glusterd_check_and_set_brick_xattr glusterd-utils.c:8203
    #4 0x7f5977554d7c in glusterd_validate_and_create_brickpath glusterd-utils.c:1549
    #5 0x7f5977645fcb in glusterd_op_stage_create_volume glusterd-volume-ops.c:1260
    #6 0x7f5977519025 in glusterd_op_stage_validate glusterd-op-sm.c:5787
    #7 0x7f5977672555 in gd_stage_op_phase glusterd-syncop.c:1297
    #8 0x7f5977676db0 in gd_sync_task_begin glusterd-syncop.c:1951
    #9 0x7f59776775dc in glusterd_op_begin_synctask glusterd-syncop.c:2016
    #10 0x7f5977642bd6 in __glusterd_handle_create_volume glusterd-volume-ops.c:506
    #11 0x7f59774e27b1 in glusterd_big_locked_handler glusterd-handler.c:83
    #12 0x7f5983b14cac in synctask_wrap syncop.c:353
    #13 0x7f59810240af  (/lib64/libc.so.6+0x240af)

Fixes: gluster#4383
Signed-off-by: chenjinhao <chen.jinhao@zte.com.cn>
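
Concretely, an early 'goto out' leaves the most recently fetched key/value pair
allocated, so the fix adds the matching GF_FREE calls at the 'out' label while the
loop keeps NULLing the pointers after each normal iteration. A condensed C sketch
(the predicate function is hypothetical):

    /* Condensed sketch of the loop-plus-cleanup shape described above:
     * whichever path reaches 'out', the pair still held at that moment is
     * released, so an early 'goto out' no longer leaks it. */
    static int
    retrieve_loop_sketch(gf_store_iter_t *iter)
    {
        char *key = NULL, *value = NULL;
        gf_store_op_errno_t op_errno = GD_STORE_SUCCESS;
        int ret = gf_store_iter_get_next(iter, &key, &value, &op_errno);

        while (!ret) {
            if (some_condition(key, value)) /* hypothetical predicate */
                goto out; /* previously leaked key and value here */

            GF_FREE(key);
            GF_FREE(value);
            key = NULL;
            value = NULL;

            ret = gf_store_iter_get_next(iter, &key, &value, &op_errno);
        }

    out:
        GF_FREE(key); /* safe on NULL; frees the pair held at 'goto out' */
        GF_FREE(value);
        return ret;
    }
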
chen1195585098 pushed a commit to chen1195585098/glusterfs that referenced this pull request Jun 26, 2024
Memory leaks detected by setting --enable-asan, compiling, installing,
and running gluster cmds such as gluster v create/start/stop, etc.

Direct leak of 11430 byte(s) in 150 object(s) allocated from:
     #0 0x7f59844efbb8 in __interceptor_malloc (/lib64/libasan.so.5+0xefbb8)
     #1 0x7f5983aeb96d in __gf_malloc (/lib64/libglusterfs.so.0+0xeb96d)
     #2 0x7f59775b569b in glusterd_store_update_volinfo ../../../../libglusterfs/src/glusterfs/mem-pool.h:170
     #3 0x7f59775be3b5 in glusterd_store_retrieve_volume glusterd-store.c:3334
     #4 0x7f59775bf076 in glusterd_store_retrieve_volumes glusterd-store.c:3571
     #5 0x7f59775bfc9e in glusterd_restore glusterd-store.c:4913
     #6  0x7f59774ca5a1  (/usr/lib64/glusterfs/8.2/xlator/mgmt/glusterd.so+0xca5a1)
     #7 0x7f5983a7cb6b in __xlator_init xlator.c:594
     #8 0x7f5983b0c5d0 in glusterfs_graph_init graph.c:422
     #9 0x7f5983b0d422 in glusterfs_graph_activate (/lib64/libglusterfs.so.0+0x10d422)
     #10 0x5605f2e1eff5 in glusterfs_process_volfp glusterfsd.c:2506
     #11 0x5605f2e1f238 in glusterfs_volumes_init glusterfsd.c:2577
     #12 0x5605f2e15d8d in main (/usr/sbin/glusterfsd+0x15d8d)
     #13 0x7f598103acf2 in __libc_start_main (/lib64/libc.so.6+0x3acf2)
     #14 0x5605f2e162cd in _start (/usr/sbin/glusterfsd+0x162cd)
In glusterd_store_update_volinfo, memory is leaked when a freshly gf_strdup()'d string is put
into a dict by calling dict_set_str, which does not take ownership of the value:
ret = dict_set_str(volinfo->dict, key, gf_strdup(value));

Direct leak of 3351 byte(s) in 30 object(s) allocated from:
     #0 0x7f59844efbb8 in __interceptor_malloc (/lib64/libasan.so.5+0xefbb8)
     #1 0x7f5983aeb96d in __gf_malloc (/lib64/libglusterfs.so.0+0xeb96d)
     #2 0x7f59775541e7 in glusterd_is_path_in_use ../../../../libglusterfs/src/glusterfs/mem-pool.h:170
     #3 0x7f59775541e7 in glusterd_check_and_set_brick_xattr glusterd-utils.c:8203
     #4 0x7f5977554d7c in glusterd_validate_and_create_brickpath glusterd-utils.c:1549
     #5 0x7f5977645fcb in glusterd_op_stage_create_volume glusterd-volume-ops.c:1260
     #6 0x7f5977519025 in glusterd_op_stage_validate glusterd-op-sm.c:5787
     #7 0x7f5977672555 in gd_stage_op_phase glusterd-syncop.c:1297
     #8 0x7f5977676db0 in gd_sync_task_begin glusterd-syncop.c:1951
     #9 0x7f59776775dc in glusterd_op_begin_synctask glusterd-syncop.c:2016
     #10 0x7f5977642bd6 in __glusterd_handle_create_volume glusterd-volume-ops.c:506
     #11 0x7f59774e27b1 in glusterd_big_locked_handler glusterd-handler.c:83
     #12 0x7f5983b14cac in synctask_wrap syncop.c:353
     #13 0x7f59810240af  (/lib64/libc.so.6+0x240af)
During volume creation, glusterd_is_path_in_use is called to check for brick path reuse.
Under normal circumstances there is no problem; however, when a `force` cmd is given during
volume creation and the further sys_lsetxattr fails, the memory area previously pointed to by
*op_errstr will be leaked, because of:
out:
    if (strlen(msg))
        *op_errstr = gf_strdup(msg);

A similar leak also exists in posix_cs_set_state:
value = GF_CALLOC(1, xattrsize + 1, gf_posix_mt_char);
...
dict_set_str_sizen(*rsp, GF_CS_OBJECT_REMOTE, value);

Signed-off-by: chenjinhao <chen.jinhao@zte.com.cn>
mohit84 pushed a commit that referenced this pull request Jul 3, 2024
* glusterd: fix memory leaks detected by asan

Memory leaks detected by setting --enable-asan, compiling, installing,
and running gluster cmds such as gluster v create/start/stop, etc.

Direct leak of 11430 byte(s) in 150 object(s) allocated from:
     #0 0x7f59844efbb8 in __interceptor_malloc (/lib64/libasan.so.5+0xefbb8)
     #1 0x7f5983aeb96d in __gf_malloc (/lib64/libglusterfs.so.0+0xeb96d)
     #2 0x7f59775b569b in glusterd_store_update_volinfo ../../../../libglusterfs/src/glusterfs/mem-pool.h:170
     #3 0x7f59775be3b5 in glusterd_store_retrieve_volume glusterd-store.c:3334
     #4 0x7f59775bf076 in glusterd_store_retrieve_volumes glusterd-store.c:3571
     #5 0x7f59775bfc9e in glusterd_restore glusterd-store.c:4913
     #6 0x7f59774ca5a1  (/usr/lib64/glusterfs/8.2/xlator/mgmt/glusterd.so+0xca5a1)
     #7 0x7f5983a7cb6b in __xlator_init xlator.c:594
     #8 0x7f5983b0c5d0 in glusterfs_graph_init graph.c:422
     #9 0x7f5983b0d422 in glusterfs_graph_activate (/lib64/libglusterfs.so.0+0x10d422)
     #10 0x5605f2e1eff5 in glusterfs_process_volfp glusterfsd.c:2506
     #11 0x5605f2e1f238 in glusterfs_volumes_init glusterfsd.c:2577
     #12 0x5605f2e15d8d in main (/usr/sbin/glusterfsd+0x15d8d)
     #13 0x7f598103acf2 in __libc_start_main (/lib64/libc.so.6+0x3acf2)
     #14 0x5605f2e162cd in _start (/usr/sbin/glusterfsd+0x162cd)
In glusterd_store_update_volinfo, memory is leaked when a freshly gf_strdup()'d string is put
into a dict by calling dict_set_str, which does not take ownership of the value:
ret = dict_set_str(volinfo->dict, key, gf_strdup(value));

Direct leak of 3351 byte(s) in 30 object(s) allocated from:
     #0 0x7f59844efbb8 in __interceptor_malloc (/lib64/libasan.so.5+0xefbb8)
     #1 0x7f5983aeb96d in __gf_malloc (/lib64/libglusterfs.so.0+0xeb96d)
     #2 0x7f59775541e7 in glusterd_is_path_in_use ../../../../libglusterfs/src/glusterfs/mem-pool.h:170
     #3 0x7f59775541e7 in glusterd_check_and_set_brick_xattr glusterd-utils.c:8203
     #4 0x7f5977554d7c in glusterd_validate_and_create_brickpath glusterd-utils.c:1549
     #5 0x7f5977645fcb in glusterd_op_stage_create_volume glusterd-volume-ops.c:1260
     #6 0x7f5977519025 in glusterd_op_stage_validate glusterd-op-sm.c:5787
     #7 0x7f5977672555 in gd_stage_op_phase glusterd-syncop.c:1297
     #8 0x7f5977676db0 in gd_sync_task_begin glusterd-syncop.c:1951
     #9 0x7f59776775dc in glusterd_op_begin_synctask glusterd-syncop.c:2016
     #10 0x7f5977642bd6 in __glusterd_handle_create_volume glusterd-volume-ops.c:506
     #11 0x7f59774e27b1 in glusterd_big_locked_handler glusterd-handler.c:83
     #12 0x7f5983b14cac in synctask_wrap syncop.c:353
     #13 0x7f59810240af  (/lib64/libc.so.6+0x240af)
During volume creation, glusterd_is_path_in_use is called to check for brick path reuse.
Under normal circumstances there is no problem; however, when a `force` cmd is given during
volume creation and the further sys_lsetxattr fails, the memory area previously pointed to by
*op_errstr will be leaked, because of:
out:
    if (strlen(msg))
        *op_errstr = gf_strdup(msg);

A similar leak also exists in posix_cs_set_state:
value = GF_CALLOC(1, xattrsize + 1, gf_posix_mt_char);
...
dict_set_str_sizen(*rsp, GF_CS_OBJECT_REMOTE, value);

Signed-off-by: chenjinhao <chen.jinhao@zte.com.cn>

* glusterd: fix memory leaks due to lack of GF_FREE

In glusterd-store.c, there are while loops like:

ret = gf_store_iter_get_next(iter, &key, &value, &op_errno);
while (!ret) {
    if (xx_condition) {
        do_something();
        goto out;
    }
    GF_FREE(key);
    GF_FREE(value);
    key = NULL;
    value = NULL;

    ret = gf_store_iter_get_next(iter, &key, &value, &op_errno);

}
It's ok in the normal case; however, once the condition is not met
and the procedure goes to 'out', there will be a memory leak.

Hence, it is necessary to put the cleanup at 'out'.

Similar leaks are triggered in glusterd_store_retrieve_peers.
If no peerinfo is found, the procedure goes to the next loop iteration.
It means memory previously allocated for key & value will be
leaked once gf_store_iter_get_next is called again in the next iteration.

Signed-off-by: chenjinhao <chen.jinhao@zte.com.cn>

---------

Signed-off-by: chenjinhao <chen.jinhao@zte.com.cn>
Co-authored-by: chenjinhao <chen.jinhao@zte.com.cn>
@l392zhan l392zhan mentioned this pull request Jul 9, 2024
ThalesBarretto pushed a commit to ThalesBarretto/glusterfs that referenced this pull request Feb 10, 2025
* glusterd: fix memory leaks detected by asan

Memory leaks detected by setting --enable-asan, compiling, installing,
and running gluster cmds such as gluster v create/start/stop, etc.

Direct leak of 11430 byte(s) in 150 object(s) allocated from:
     #0 0x7f59844efbb8 in __interceptor_malloc (/lib64/libasan.so.5+0xefbb8)
     #1 0x7f5983aeb96d in __gf_malloc (/lib64/libglusterfs.so.0+0xeb96d)
     #2 0x7f59775b569b in glusterd_store_update_volinfo ../../../../libglusterfs/src/glusterfs/mem-pool.h:170
     #3 0x7f59775be3b5 in glusterd_store_retrieve_volume glusterd-store.c:3334
     #4 0x7f59775bf076 in glusterd_store_retrieve_volumes glusterd-store.c:3571
     #5 0x7f59775bfc9e in glusterd_restore glusterd-store.c:4913
     #6  0x7f59774ca5a1  (/usr/lib64/glusterfs/8.2/xlator/mgmt/glusterd.so+0xca5a1)
     #7 0x7f5983a7cb6b in __xlator_init xlator.c:594
     #8 0x7f5983b0c5d0 in glusterfs_graph_init graph.c:422
     #9 0x7f5983b0d422 in glusterfs_graph_activate (/lib64/libglusterfs.so.0+0x10d422)
     #10 0x5605f2e1eff5 in glusterfs_process_volfp glusterfsd.c:2506
     #11 0x5605f2e1f238 in glusterfs_volumes_init glusterfsd.c:2577
     #12 0x5605f2e15d8d in main (/usr/sbin/glusterfsd+0x15d8d)
     #13 0x7f598103acf2 in __libc_start_main (/lib64/libc.so.6+0x3acf2)
     #14 0x5605f2e162cd in _start (/usr/sbin/glusterfsd+0x162cd)
In glusterd_store_update_volinfo, memory is leaked when a freshly gf_strdup()'d string is put
into a dict by calling dict_set_str, which does not take ownership of the value:
ret = dict_set_str(volinfo->dict, key, gf_strdup(value));

Direct leak of 3351 byte(s) in 30 object(s) allocated from:
     #0 0x7f59844efbb8 in __interceptor_malloc (/lib64/libasan.so.5+0xefbb8)
     #1 0x7f5983aeb96d in __gf_malloc (/lib64/libglusterfs.so.0+0xeb96d)
     #2 0x7f59775541e7 in glusterd_is_path_in_use ../../../../libglusterfs/src/glusterfs/mem-pool.h:170
     #3 0x7f59775541e7 in glusterd_check_and_set_brick_xattr glusterd-utils.c:8203
     #4 0x7f5977554d7c in glusterd_validate_and_create_brickpath glusterd-utils.c:1549
     #5 0x7f5977645fcb in glusterd_op_stage_create_volume glusterd-volume-ops.c:1260
     #6 0x7f5977519025 in glusterd_op_stage_validate glusterd-op-sm.c:5787
     #7 0x7f5977672555 in gd_stage_op_phase glusterd-syncop.c:1297
     #8 0x7f5977676db0 in gd_sync_task_begin glusterd-syncop.c:1951
     #9 0x7f59776775dc in glusterd_op_begin_synctask glusterd-syncop.c:2016
     #10 0x7f5977642bd6 in __glusterd_handle_create_volume glusterd-volume-ops.c:506
     #11 0x7f59774e27b1 in glusterd_big_locked_handler glusterd-handler.c:83
     #12 0x7f5983b14cac in synctask_wrap syncop.c:353
     #13 0x7f59810240af  (/lib64/libc.so.6+0x240af)
During volume creation, glusterd_is_path_in_use is called to check for brick path reuse.
Under normal circumstances there is no problem; however, when a `force` cmd is given during
volume creation and the further sys_lsetxattr fails, the memory area previously pointed to by
*op_errstr will be leaked, because of:
out:
    if (strlen(msg))
        *op_errstr = gf_strdup(msg);

A similar leak also exists in posix_cs_set_state:
value = GF_CALLOC(1, xattrsize + 1, gf_posix_mt_char);
...
dict_set_str_sizen(*rsp, GF_CS_OBJECT_REMOTE, value);

Signed-off-by: chenjinhao <chen.jinhao@zte.com.cn>

* glusterd: fix memory leaks due to lack of GF_FREE

In glusterd-store.c, there are while loops like:

ret = gf_store_iter_get_next(iter, &key, &value, &op_errno);
while (!ret) {
    if (xx_condition) {
        do_something();
        goto out;
    }
    GF_FREE(key);
    GF_FREE(value);
    key = NULL;
    value = NULL;

    ret = gf_store_iter_get_next(iter, &key, &value, &op_errno);

}
It's ok in the normal case; however, once the condition is not met
and the procedure goes to 'out', there will be a memory leak.

Hence, it is necessary to put the cleanup at 'out'.

Similar leaks are triggered in glusterd_store_retrieve_peers.
If no peerinfo is found, the procedure goes to the next loop iteration.
It means memory previously allocated for key & value will be
leaked once gf_store_iter_get_next is called again in the next iteration.

Signed-off-by: chenjinhao <chen.jinhao@zte.com.cn>

---------

Signed-off-by: chenjinhao <chen.jinhao@zte.com.cn>
Co-authored-by: chenjinhao <chen.jinhao@zte.com.cn>
chen1195585098 pushed a commit to chen1195585098/glusterfs that referenced this pull request Jun 9, 2025
asan reports:
Direct leak of 4386 byte(s) in 34 object(s) allocated from:
    #0 0x7ff3b72efbb8 in __interceptor_malloc (/lib64/libasan.so.5+0xefbb8)
    #1 0x7ff3b6902420 in __gf_malloc (/lib64/libglusterfs.so.0+0x102420)
    #2 0x7ff3b6876e18 in dict_allocate_and_serialize (/lib64/libglusterfs.so.0+0x76e18)
    #3 0x7ff3aa5ec701 in glusterd_mgmt_handshake glusterd-handshake.c:2212
    #4 0x7ff3aa5ee2cc in __glusterd_peer_dump_version_cbk glusterd-handshake.c:2388
    #5 0x7ff3aa5b58ca in glusterd_big_locked_cbk glusterd-rpc-ops.c:217
    #6 0x7ff3b642ffcb in rpc_clnt_handle_reply rpc-clnt.c:780
    #7 0x7ff3b64307e9 in rpc_clnt_notify rpc-clnt.c:957
    #8 0x7ff3b6425c2c in rpc_transport_notify (/lib64/libgfrpc.so.0+0x25c2c)
    #9 0x7ff3a880d874 in socket_event_poll_in_async socket.c:2531
    #10 0x7ff3a882828e in socket_event_poll_in ../../../../libglusterfs/src/glusterfs/async.h:189
    #11 0x7ff3a882828e in socket_event_handler socket.c:2963
    #12 0x7ff3a882828e in socket_event_handler socket.c:2883
    #13 0x7ff3b69af776 in event_dispatch_epoll_handler event-epoll.c:640
    #14 0x7ff3b69af776 in event_dispatch_epoll_worker event-epoll.c:751
    #15 0x7ff3b48082fe in start_thread pthread_create.c:479
    #16 0x7ff3b3e39dd2 in __GI___clone (/lib64/libc.so.6+0x39dd2)

Fixes: gluster#4558
Signed-off-by: chenjinhao <chen.jinhao@zte.com.cn>
chen1195585098 pushed a commit to chen1195585098/glusterfs that referenced this pull request Jun 9, 2025
asan reports:
Direct leak of 4386 byte(s) in 34 object(s) allocated from:
    #0 0x7ff3b72efbb8 in __interceptor_malloc (/lib64/libasan.so.5+0xefbb8)
    #1 0x7ff3b6902420 in __gf_malloc (/lib64/libglusterfs.so.0+0x102420)
    #2 0x7ff3b6876e18 in dict_allocate_and_serialize (/lib64/libglusterfs.so.0+0x76e18)
    #3 0x7ff3aa5ec701 in glusterd_mgmt_handshake glusterd-handshake.c:2212
    #4 0x7ff3aa5ee2cc in __glusterd_peer_dump_version_cbk glusterd-handshake.c:2388
    #5 0x7ff3aa5b58ca in glusterd_big_locked_cbk glusterd-rpc-ops.c:217
    #6 0x7ff3b642ffcb in rpc_clnt_handle_reply rpc-clnt.c:780
    #7 0x7ff3b64307e9 in rpc_clnt_notify rpc-clnt.c:957
    #8 0x7ff3b6425c2c in rpc_transport_notify (/lib64/libgfrpc.so.0+0x25c2c)
    #9 0x7ff3a880d874 in socket_event_poll_in_async socket.c:2531
    #10 0x7ff3a882828e in socket_event_poll_in ../../../../libglusterfs/src/glusterfs/async.h:189
    #11 0x7ff3a882828e in socket_event_handler socket.c:2963
    #12 0x7ff3a882828e in socket_event_handler socket.c:2883
    #13 0x7ff3b69af776 in event_dispatch_epoll_handler event-epoll.c:640
    #14 0x7ff3b69af776 in event_dispatch_epoll_worker event-epoll.c:751
    #15 0x7ff3b48082fe in start_thread pthread_create.c:479
    #16 0x7ff3b3e39dd2 in __GI___clone (/lib64/libc.so.6+0x39dd2)

Fixes: gluster#4558
Signed-off-by: chenjinhao <chen.jinhao@zte.com.cn>