Memory leaks #52

Closed
asgard38 opened this issue Nov 28, 2016 · 5 comments
Labels: Question, wontfix (Managed by stale[bot])

Comments

@asgard38

libvirtd leaks a lot of memory when using the glusterfs driver.
Here is some output from valgrind:

==27470== 2,704,894 (272 direct, 2,704,622 indirect) bytes in 1 blocks are definitely lost in loss record 1,926 of 1,927
==27470== at 0x4C2B974: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==27470== by 0x1CB35D37: __gf_calloc (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB25508: inode_table_new (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x2B5FA904: ???
==27470== by 0x2B60B639: ???
==27470== by 0x1CB006F6: xlator_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB433F8: glusterfs_graph_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB43D0A: glusterfs_graph_activate (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1C8C598C: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1C8C5B31: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1CDB7A7F: rpc_clnt_handle_reply (in /usr/lib64/libgfrpc.so.0.0.1)
==27470== by 0x1CDB7D3E: rpc_clnt_notify (in /usr/lib64/libgfrpc.so.0.0.1)

==27470== 639,120 bytes in 2 blocks are possibly lost in loss record 1,917 of 1,927
==27470== at 0x4C2B974: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==27470== by 0x1CB35D37: __gf_calloc (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB364E6: mem_pool_new_fn (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB2555A: inode_table_new (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x2B3B3904: ???
==27470== by 0x2B3C4639: ???
==27470== by 0x1CB006F6: xlator_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB433F8: glusterfs_graph_init (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1CB43D0A: glusterfs_graph_activate (in /usr/lib64/libglusterfs.so.0.0.1)
==27470== by 0x1C8C598C: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1C8C5B31: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==27470== by 0x1CDB7A7F: rpc_clnt_handle_reply (in /usr/lib64/libgfrpc.so.0.0.1)

Version information:
CentOS Linux release 7.2.1511

glusterfs-client-xlators-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-fuse-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-api-3.7.1-16.0.1.el7.centos.x86_64
glusterfs-libs-3.7.1-16.0.1.el7.centos.x86_64
libvirt-1.2.17-13.el7_2.5.x86_64
qemu-kvm-1.5.3-105.el7_2.7.x86_64
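For anyone wanting to reproduce this outside of libvirt: both traces above originate in allocations made while glfs_init() builds and activates the translator graph (xlator_init -> glusterfs_graph_init), so a minimal gfapi client that only initializes and tears down a connection should hit the same path. The sketch below assumes a placeholder volume name "testvol" and server "gluster-server"; adjust for your setup and run the binary under valgrind --leak-check=full.

/* Minimal gfapi client exercising the init/teardown path from the traces.
 * Build (roughly): gcc repro.c -o repro $(pkg-config --cflags --libs glusterfs-api)
 * "testvol" and "gluster-server" are placeholders for this sketch. */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("testvol");              /* placeholder volume name */
    if (!fs) {
        fprintf(stderr, "glfs_new failed\n");
        return 1;
    }

    glfs_set_volfile_server(fs, "tcp", "gluster-server", 24007);

    if (glfs_init(fs) != 0) {                      /* builds/activates the graph */
        fprintf(stderr, "glfs_init failed\n");
        glfs_fini(fs);
        return 1;
    }

    /* No I/O is needed; the allocations in the traces happen during init. */
    glfs_fini(fs);                                 /* any leftover allocations show as leaks here */
    return 0;
}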

@unitednettech

Hello,
We ran some tests with libvirtd under valgrind with GlusterFS enabled.
Valgrind reports many memory leaks in the GlusterFS driver (library libglusterfs.so.0.0.1).
For example:
==31235== 7,794 (5,712 direct, 2,082 indirect) bytes in 2 blocks are definitely lost in loss record 2,726 of 2,802
==31235== at 0x4C2B974: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==31235== by 0x1CB35D37: __gf_calloc (in /usr/lib64/libglusterfs.so.0.0.1)
==31235== by 0x1CDB2AB8: rpc_transport_load (in /usr/lib64/libgfrpc.so.0.0.1)
==31235== by 0x1CDB75CF: rpc_clnt_new (in /usr/lib64/libgfrpc.so.0.0.1)
==31235== by 0x29B86E4D: ???
==31235== by 0x29B88022: ???
==31235== by 0x1CB006F6: xlator_init (in /usr/lib64/libglusterfs.so.0.0.1)
==31235== by 0x1CB433F8: glusterfs_graph_init (in /usr/lib64/libglusterfs.so.0.0.1)
==31235== by 0x1CB43D0A: glusterfs_graph_activate (in /usr/lib64/libglusterfs.so.0.0.1)
==31235== by 0x1C8C598C: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==31235== by 0x1C8C5B31: ??? (in /usr/lib64/libgfapi.so.0.0.0)
==31235== by 0x1CDB7A7F: rpc_clnt_handle_reply (in /usr/lib64/libgfrpc.so.0.0.1)

@unitednettech

We are seeing the same thing! (Same valgrind trace as in our previous comment.)


@stale

stale bot commented Apr 30, 2020

Thank you for your contributions.
We noticed that this issue has not had any activity in the last ~6 months, so we are marking it as stale.
It will be closed in 2 weeks if no one responds with a comment here.

@stale stale bot added the wontfix (Managed by stale[bot]) label on Apr 30, 2020
@stale

stale bot commented May 15, 2020

Closing this issue as there has been no update since my last comment. If this issue is still valid, feel free to reopen it.

@stale stale bot closed this as completed May 15, 2020
@stale

stale bot commented May 30, 2020

Closing this issue as there has been no update since my last comment. If this issue is still valid, feel free to reopen it.
