RTCP Memory leak #66

Closed
lcligny opened this issue May 25, 2016 · 54 comments

@lcligny

lcligny commented May 25, 2016

Hello,

On a system handling about 100 simultaneous calls, the captagent process VIRT, SHR and RES memory grows rapidly, eventually eating up all the resources.

With "socketspcap_rtcp" to enable="false" the memory usage remains stable.

Here is a valgrind --leak-check=yes output for captagent with socketspcap_rtcp set to "true" and 100 simultaneous calls during less than one minute:

==19742== HEAP SUMMARY:
==19742== in use at exit: 56,920,889 bytes in 11,972 blocks
==19742== total heap usage: 36,841 allocs, 24,869 frees, 65,420,927 bytes allocated
==19742==
==19742== 14 bytes in 7 blocks are definitely lost in loss record 3 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E379F1: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0x6016A86: load_module_xml_config (database_hash.c:366)
==19742== by 0x6016C97: load_module (database_hash.c:411)
==19742== by 0x406543: register_module (modules.c:132)
==19742==
==19742== 14 bytes in 7 blocks are definitely lost in loss record 4 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E379F1: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0x6A1B616: load_module_xml_config (protocol_rtcp.c:145)
==19742== by 0x6A1B821: load_module (protocol_rtcp.c:189)
==19742== by 0x406543: register_module (modules.c:132)
==19742==
==19742== 16 bytes in 4 blocks are definitely lost in loss record 6 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E37969: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0x6A1B616: load_module_xml_config (protocol_rtcp.c:145)
==19742== by 0x6A1B821: load_module (protocol_rtcp.c:189)
==19742== by 0x406543: register_module (modules.c:132)
==19742==
==19742== 16 bytes in 8 blocks are definitely lost in loss record 7 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E379F1: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0x5E0ECE6: load_module_xml_config (protocol_sip.c:481)
==19742== by 0x5E0EEF6: load_module (protocol_sip.c:525)
==19742== by 0x406543: register_module (modules.c:132)
==19742==
==19742== 19 bytes in 5 blocks are definitely lost in loss record 8 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E37969: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0x5E0ECE6: load_module_xml_config (protocol_sip.c:481)
==19742== by 0x5E0EEF6: load_module (protocol_sip.c:525)
==19742== by 0x406543: register_module (modules.c:132)
==19742==
==19742== 24 bytes in 12 blocks are definitely lost in loss record 9 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E379F1: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0x6C1EFA6: load_module_xml_config (socket_rtcpxr.c:294)
==19742== by 0x6C1F1B4: load_module (socket_rtcpxr.c:348)
==19742== by 0x406543: register_module (modules.c:132)
==19742==
==19742== 24 bytes in 12 blocks are definitely lost in loss record 10 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E379F1: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0xA37B086: load_module_xml_config (socket_raw.c:580)
==19742== by 0xA37B294: load_module (socket_raw.c:634)
==19742== by 0x406543: register_module (modules.c:132)
==19742==
==19742== 26 bytes in 13 blocks are definitely lost in loss record 11 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E379F1: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0x5C08F56: load_module_xml_config (transport_hep.c:915)
==19742== by 0x5C09164: load_module (transport_hep.c:962)
==19742== by 0x406543: register_module (modules.c:132)
==19742==
==19742== 31 bytes in 9 blocks are definitely lost in loss record 12 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E37969: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0x6C1EFA6: load_module_xml_config (socket_rtcpxr.c:294)
==19742== by 0x6C1F1B4: load_module (socket_rtcpxr.c:348)
==19742== by 0x406543: register_module (modules.c:132)
==19742==
==19742== 32 bytes in 8 blocks are definitely lost in loss record 14 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E37969: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0xA37B086: load_module_xml_config (socket_raw.c:580)
==19742== by 0xA37B294: load_module (socket_raw.c:634)
==19742== by 0x406543: register_module (modules.c:132)
==19742==
==19742== 34 bytes in 10 blocks are definitely lost in loss record 15 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E37969: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0x5C08F56: load_module_xml_config (transport_hep.c:915)
==19742== by 0x5C09164: load_module (transport_hep.c:962)
==19742== by 0x406543: register_module (modules.c:132)
==19742==
==19742== 44 bytes in 4 blocks are definitely lost in loss record 17 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E37969: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0x6016A86: load_module_xml_config (database_hash.c:366)
==19742== by 0x6016C97: load_module (database_hash.c:411)
==19742== by 0x406543: register_module (modules.c:132)
==19742==
==19742== 60 bytes in 30 blocks are definitely lost in loss record 28 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E379F1: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0x4042AF: load_xml_config (captagent.c:337)
==19742== by 0x4039E9: main (captagent.c:303)
==19742==
==19742== 60 bytes in 30 blocks are definitely lost in loss record 29 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E379F1: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0x6E24D36: load_module_xml_config (socket_pcap.c:677)
==19742== by 0x6E24F44: load_module (socket_pcap.c:731)
==19742== by 0x406543: register_module (modules.c:132)
==19742==
==19742== 64 bytes in 1 blocks are definitely lost in loss record 32 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x55323E4: gaih_inet (getaddrinfo.c:1275)
==19742== by 0x5535A11: getaddrinfo (getaddrinfo.c:2441)
==19742== by 0x5C0841C: init_hepsocket_blocking (transport_hep.c:774)
==19742== by 0x5C09668: load_module (transport_hep.c:1124)
==19742== by 0x406543: register_module (modules.c:132)
==19742== by 0x4067DD: register_modules (modules.c:216)
==19742== by 0x403A48: main (captagent.c:324)
==19742==
==19742== 83 bytes in 23 blocks are definitely lost in loss record 34 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E37969: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0x6E24D36: load_module_xml_config (socket_pcap.c:677)
==19742== by 0x6E24F44: load_module (socket_pcap.c:731)
==19742== by 0x406543: register_module (modules.c:132)
==19742==
==19742== 123 bytes in 27 blocks are definitely lost in loss record 38 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8AE1: strndup (strndup.c:46)
==19742== by 0x406885: xml_charhndl (xmlread.c:123)
==19742== by 0x4E37969: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3884D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3A36D: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3AB1A: ??? (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x4E3CB5C: XML_ParseBuffer (in /lib/x86_64-linux-gnu/libexpat.so.1.6.0)
==19742== by 0x406BC5: xml_parse (xmlread.c:167)
==19742== by 0x4042AF: load_xml_config (captagent.c:337)
==19742== by 0x4039E9: main (captagent.c:303)
==19742==
==19742== 135 bytes in 15 blocks are definitely lost in loss record 39 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x54F8A81: strdup (strdup.c:43)
==19742== by 0x407404: addstr.constprop.1 (capplan.l:246)
==19742== by 0x407D88: yylex (capplan.l:214)
==19742== by 0x409140: yyparse (capplan.tab.c:1455)
==19742== by 0x6E252BE: load_module (socket_pcap.c:902)
==19742== by 0x406543: register_module (modules.c:132)
==19742== by 0x4067DD: register_modules (modules.c:216)
==19742== by 0x403A48: main (captagent.c:324)
==19742==
==19742== 216 bytes in 1 blocks are definitely lost in loss record 48 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x4046A42: ??? (in /usr/lib/x86_64-linux-gnu/libpcap.so.1.3.0)
==19742== by 0x403B22B: pcap_compile (in /usr/lib/x86_64-linux-gnu/libpcap.so.1.3.0)
==19742== by 0x6E2496D: init_socket (socket_pcap.c:535)
==19742== by 0x6E25217: load_module (socket_pcap.c:872)
==19742== by 0x406543: register_module (modules.c:132)
==19742== by 0x4067DD: register_modules (modules.c:216)
==19742== by 0x403A48: main (captagent.c:324)
==19742==
==19742== 272 bytes in 1 blocks are possibly lost in loss record 50 of 73
==19742== at 0x4C272B8: calloc (vg_replace_malloc.c:566)
==19742== by 0x401125E: _dl_allocate_tls (dl-tls.c:297)
==19742== by 0x52644ED: pthread_create@@GLIBC_2.2.5 (allocatestack.c:585)
==19742== by 0x601705A: timer_init (captarray.c:50)
==19742== by 0x6016ED2: load_module (database_hash.c:499)
==19742== by 0x406543: register_module (modules.c:132)
==19742== by 0x4067DD: register_modules (modules.c:216)
==19742== by 0x403A48: main (captagent.c:324)
==19742==
==19742== 1,520 bytes in 7 blocks are definitely lost in loss record 67 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x4064DB: register_module (modules.c:119)
==19742== by 0x4067DD: register_modules (modules.c:216)
==19742== by 0x403A48: main (captagent.c:324)
==19742==
==19742== 5,000 bytes in 1 blocks are possibly lost in loss record 69 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x6A1B529: w_parse_rtcp_to_json (protocol_rtcp.c:87)
==19742== by 0x404B4E: run_actions (conf_function.c:233)
==19742== by 0x404D98: eval_expr (conf_function.c:140)
==19742== by 0x404ED0: do_action (conf_function.c:88)
==19742== by 0x404B4E: run_actions (conf_function.c:233)
==19742== by 0x404B4E: run_actions (conf_function.c:233)
==19742== by 0x404B4E: run_actions (conf_function.c:233)
==19742== by 0x6E241C3: callback_proto (socket_pcap.c:436)
==19742== by 0x4031FBD: ??? (in /usr/lib/x86_64-linux-gnu/libpcap.so.1.3.0)
==19742== by 0x4038F00: pcap_loop (in /usr/lib/x86_64-linux-gnu/libpcap.so.1.3.0)
==19742== by 0x6E23B38: proto_collect (socket_pcap.c:642)
==19742==
==19742== 56,690,000 bytes in 11,338 blocks are definitely lost in loss record 73 of 73
==19742== at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==19742== by 0x6A1B529: w_parse_rtcp_to_json (protocol_rtcp.c:87)
==19742== by 0x404B4E: run_actions (conf_function.c:233)
==19742== by 0x404D98: eval_expr (conf_function.c:140)
==19742== by 0x404ED0: do_action (conf_function.c:88)
==19742== by 0x404B4E: run_actions (conf_function.c:233)
==19742== by 0x404B4E: run_actions (conf_function.c:233)
==19742== by 0x404B4E: run_actions (conf_function.c:233)
==19742== by 0x6E241C3: callback_proto (socket_pcap.c:436)
==19742== by 0x4031FBD: ??? (in /usr/lib/x86_64-linux-gnu/libpcap.so.1.3.0)
==19742== by 0x4038F00: pcap_loop (in /usr/lib/x86_64-linux-gnu/libpcap.so.1.3.0)
==19742== by 0x6E23B38: proto_collect (socket_pcap.c:642)
==19742==
==19742== LEAK SUMMARY:
==19742== definitely lost: 56,692,555 bytes in 11,571 blocks
==19742== indirectly lost: 0 bytes in 0 blocks
==19742== possibly lost: 5,272 bytes in 2 blocks
==19742== still reachable: 223,062 bytes in 399 blocks
==19742== suppressed: 0 bytes in 0 blocks
==19742== Reachable blocks (those to which a pointer was found) are not shown.
==19742== To see them, rerun with: --leak-check=full --show-reachable=yes
==19742==
==19742== For counts of detected and suppressed errors, rerun with: -v
==19742== Use --track-origins=yes to see where uninitialised values come from
==19742== ERROR SUMMARY: 29 errors from 26 contexts (suppressed: 25 from 7)
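The dominant record above points at protocol_rtcp.c:87: roughly 5,000 bytes are malloc'd per captured RTCP packet inside w_parse_rtcp_to_json and never released, which matches the 56 MB / 11,338-block figure. Below is a minimal C sketch of that kind of per-packet leak, assuming a simplified call path; apart from the w_parse_rtcp_to_json name shown in the trace, every identifier is hypothetical:

#include <stdio.h>
#include <stdlib.h>

#define JSON_BUF_SIZE 5000

/* stand-ins for the real parsing / forwarding code (hypothetical) */
static int rtcp_to_json(const char *pkt, size_t len, char *out, size_t out_sz)
{
    return snprintf(out, out_sz, "{\"rtcp_bytes\":%zu}", len) < (int)out_sz ? 0 : -1;
}

static void send_json(const char *json)
{
    puts(json);
}

/* sketch of the leaking call path: one malloc per captured RTCP packet */
int handle_rtcp_packet(const char *pkt, size_t len)
{
    char *json = malloc(JSON_BUF_SIZE);
    if (!json)
        return -1;

    if (rtcp_to_json(pkt, len, json, JSON_BUF_SIZE) != 0) {
        free(json);                 /* error path releases the buffer ...        */
        return -1;
    }

    send_json(json);
    free(json);                     /* ... without this free on the success path
                                       the process leaks ~5 KB per RTCP packet   */
    return 0;
}

int main(void)
{
    const char fake_pkt[8] = { 0 }; /* placeholder RTCP payload */
    return handle_rtcp_packet(fake_pkt, sizeof fake_pkt);
}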

@adubovikov
Member

can you please run it with --leak-check=full --show-reachable=yes to see the full picture ?
because so far I only see non-released xml_node allocations, and those do not affect the memory leak at all

@lcligny
Author

lcligny commented May 26, 2016

Here is the valgrind --leak-check=full --show-reachable=yes output.
Valgrind_full_captagent_with_RTCP.txt

@killdashnine

Seeing the same here: multiple servers with call loads over 200, with RTCP enabled, filling up 8G of memory and swap.

@adubovikov
Member

guys, can you please re-test with the latest git version ?
thank you

@lcligny
Author

lcligny commented May 27, 2016

It's been running for 5 minutes on two servers handling 80 SIP calls each, and memory remains stable for me. If it's OK for you, I will run it a little longer before closing the issue.
Thanks again for your support.

@adubovikov
Member

yes, please run it a bit longer and we will close the issue :-)

thank you!

@killdashnine

Will run a full production test as well over 12 nodes running RTCP. But I will do so on Monday (as I don't want to test it over the weekend :) )

@killdashnine

Looks like the majority of the issue has been resolved. However, on a few hosts I still see RES increasing by 4 bytes every 1-2 seconds, even though the call volume is actually decreasing. Will keep monitoring.

@adubovikov
Member

thanks for the update. Please keep it running for 2-3 days and see if the memory goes up.

thank you

Wbr,
Alexandr


@lcligny
Author

lcligny commented May 30, 2016

I also see memory growing a little, but not as fast as before. After a 2-day run with calls, RES grew from 188M to 340M as I write. I didn't run valgrind on the new code, but if there's still something "definitely lost", shouldn't we try to address it? I can provide valgrind output as needed ;) I'll keep monitoring too.

@adubovikov
Member

just pushed some patches to avoid potential memory leaks

after the patches, I only have two points left, but these can be ignored, since each is called only once.

can you please recompile and check on your host ?

==3875== HEAP SUMMARY:
==3875== in use at exit: 39,125 bytes in 221 blocks
==3875== total heap usage: 1,247 allocs, 1,026 frees, 327,082 bytes allocated
==3875==
==3875== 154 bytes in 17 blocks are definitely lost in loss record 23 of 50
==3875== at 0x4C28C20: malloc (vg_replace_malloc.c:296)
==3875== by 0x5740989: strdup (strdup.c:42)
==3875== by 0x406964: addstr.constprop.1 (capplan.l:245)
==3875== by 0x407777: yylex (capplan.l:214)
==3875== by 0x40857F: yyparse (capplan.tab.c:1286)
==3875== by 0x78B5007: load_module (socket_pcap.c:905)
==3875== by 0x405C09: register_module (modules.c:132)
==3875== by 0x405E9D: register_modules (modules.c:220)
==3875== by 0x403148: main (captagent.c:324)
==3875==
==3875== 272 bytes in 1 blocks are possibly lost in loss record 31 of 50
==3875== at 0x4C2AD10: calloc (vg_replace_malloc.c:623)
==3875== by 0x4010F91: allocate_dtv (dl-tls.c:296)
==3875== by 0x401169D: _dl_allocate_tls (dl-tls.c:460)
==3875== by 0x54AAC27: allocate_stack (allocatestack.c:589)
==3875== by 0x54AAC27: pthread_create@@GLIBC_2.2.5 (pthread_create.c:495)
==3875== by 0x6AA6E1A: timer_init (captarray.c:50)
==3875== by 0x6AA6CB2: load_module (database_hash.c:500)
==3875== by 0x405C09: register_module (modules.c:132)
==3875== by 0x405E9D: register_modules (modules.c:220)
==3875== by 0x403148: main (captagent.c:324)
==3875==
==3875== LEAK SUMMARY:
==3875== definitely lost: 154 bytes in 17 blocks
==3875== indirectly lost: 0 bytes in 0 blocks
==3875== possibly lost: 272 bytes in 1 blocks
==3875== still reachable: 38,699 bytes in 203 blocks
==3875== suppressed: 0 bytes in 0 blocks
==3875== Reachable blocks (those to which a pointer was found) are not shown.
==3875== To see them, rerun with: --leak-check=full --show-leak-kinds=all


@killdashnine

killdashnine commented May 30, 2016

lcligny, are you using CentOS/RHEL? I still see some leaking on EL6 and EL7 machines, but not on Debian.

edit: on Debian too, but a lot less


@adubovikov
Member

Matthias, are you using the latest git code ?


@killdashnine

Yes:

git log -1

commit bbc7e6c
Author: Alexandr Dubovikov <************>
Date: Mon May 30 12:43:22 2016 +0200

fixed some minor memory leaks


@adubovikov
Member

can you please run valgrind on one of your boxes ?


@lcligny
Author

lcligny commented May 30, 2016

For the record, I'm running vanilla Debian 7 with Asterisk 11 on those boxes.
I just recompiled captagent from the latest git code. As with the previous build, it will need to run for some hours to see if the memory consumption stays OK, but for now it works.

@killdashnine

Correct me if I'm wrong, as I just had a quick look, but I can't find any reference to clear_ipport being called in database_hash.

As you can see, most of the memory is used in database_hash:

==7617== 28,840 bytes in 103 blocks are still reachable in loss record 51 of 52
==7617== at 0x4C27A2E: malloc (vg_replace_malloc.c:270)
==7617== by 0x625CE92: add_timer (captarray.c:57)
==7617== by 0x625CD18: w_check_rtcp_ipport (database_hash.c:115)
==7617== by 0x404B36: run_actions (conf_function.c:233)
==7617== by 0x404D5B: eval_expr (conf_function.c:140)
==7617== by 0x404CFB: eval_expr (conf_function.c:186)
==7617== by 0x404EDD: do_action (conf_function.c:88)
==7617== by 0x404B36: run_actions (conf_function.c:233)
==7617== by 0x404B36: run_actions (conf_function.c:233)
==7617== by 0x404B36: run_actions (conf_function.c:233)
==7617== by 0x404B36: run_actions (conf_function.c:233)
==7617== by 0x726AF9F: callback_proto (socket_pcap.c:436)
==7617==
==7617== 800,976 bytes in 814 blocks are still reachable in loss record 52 of 52
==7617== at 0x4C27A2E: malloc (vg_replace_malloc.c:270)
==7617== by 0x625C5CD: add_ipport (database_hash.c:153)
==7617== by 0x625CD23: w_check_rtcp_ipport (database_hash.c:116)
==7617== by 0x404B36: run_actions (conf_function.c:233)
==7617== by 0x404D5B: eval_expr (conf_function.c:140)
==7617== by 0x404CFB: eval_expr (conf_function.c:186)
==7617== by 0x404EDD: do_action (conf_function.c:88)
==7617== by 0x404B36: run_actions (conf_function.c:233)
==7617== by 0x404B36: run_actions (conf_function.c:233)
==7617== by 0x404B36: run_actions (conf_function.c:233)
==7617== by 0x404B36: run_actions (conf_function.c:233)
==7617== by 0x726AF9F: callback_proto (socket_pcap.c:436)
==7617==
==7617== LEAK SUMMARY:
==7617== definitely lost: 838 bytes in 63 blocks
==7617== indirectly lost: 0 bytes in 0 blocks
==7617== possibly lost: 560 bytes in 1 blocks
==7617== still reachable: 868,755 bytes in 1,116 blocks
==7617== suppressed: 0 bytes in 0 blocks
==7617==

However:

struct ipport_items *ipports

This still references the calls of course, so the memory is still reachable, but if nothing is ever cleared from it then it will just grow and grow.

I can find a reference to add_ipport, but none to clear_ipport:

captagent/mod/proto_uni/proto_uni.c:
add_ipport(ipptmp, &psip.callid);

I do see a timer is set there, but in the timer code it's not calling clear_ipport.

Any thoughts?
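
A minimal sketch of the pattern described above, assuming a uthash-keyed table (captagent does use uthash); the struct layout, field names and the clear_ipport body are illustrative, not the actual database_hash.c code:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include "uthash.h"

/* illustrative layout, not the real struct */
struct ipport_items {
    char callid[256];          /* hash key                             */
    char ipport[64];           /* media ip:port learned from the SDP   */
    time_t added;              /* insertion time, for later expiration */
    UT_hash_handle hh;
};

static struct ipport_items *ipports = NULL;

/* every call adds an entry ... */
void add_ipport(const char *ipport, const char *callid)
{
    struct ipport_items *e = calloc(1, sizeof *e);
    if (!e)
        return;
    snprintf(e->callid, sizeof e->callid, "%s", callid);
    snprintf(e->ipport, sizeof e->ipport, "%s", ipport);
    e->added = time(NULL);
    HASH_ADD_STR(ipports, callid, e);
}

/* ... so without a matching removal like this, the table can only grow */
void clear_ipport(const char *callid)
{
    struct ipport_items *e = NULL;
    HASH_FIND_STR(ipports, callid, e);
    if (e) {
        HASH_DEL(ipports, e);
        free(e);
    }
}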


@adubovikov
Member

the first part will be released during the timer check:

https://github.com/sipcapture/captagent/blob/master/src/modules/database/hash/captarray.c#L97

so here everything should be fine.

the second part definitely has an issue... checking.

Wbr,
Alexandr


@adubovikov
Member

just checked one more time: the second part doesn't have an issue either, because we clean up the hash on the check_ipport timer:

https://github.com/sipcapture/captagent/blob/master/src/modules/database/hash/database_hash.c#L287

https://github.com/sipcapture/captagent/blob/master/src/modules/database/hash/database_hash.c#L306

https://github.com/sipcapture/captagent/blob/master/src/modules/database/hash/database_hash.c#L309

I have just tied the expire_hash_rtcp value to rtcp_timeout, so you can set this value individually...

Wbr,
Alexandr
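
To make the timer-driven cleanup concrete, here is a rough sketch, under the same illustrative assumptions as the earlier ipport sketch, of a check_ipport pass bounded by rtcp_timeout; it is not the actual database_hash.c / captarray.c code:

#include <stdlib.h>
#include <time.h>
#include "uthash.h"

struct ipport_items {               /* same illustrative layout as above */
    char callid[256];
    char ipport[64];
    time_t added;
    UT_hash_handle hh;
};

static struct ipport_items *ipports = NULL;
static int rtcp_timeout = 60;       /* seconds; configurable per the comment above */

/* periodic timer pass: drop entries older than rtcp_timeout so the
   hash stays bounded instead of growing with every call */
void check_ipport(void)
{
    struct ipport_items *e, *tmp;
    time_t now = time(NULL);

    HASH_ITER(hh, ipports, e, tmp) {        /* safe to delete while iterating */
        if (now - e->added > rtcp_timeout) {
            HASH_DEL(ipports, e);
            free(e);
        }
    }
}

In the real module this pass would be driven by the timer thread started from captarray.c (the timer_init seen in the traces above); the sketch only shows the shape of the expiration step.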


@killdashnine

Ok, RES is now at 616M on one of the machines and 260M on another. I will try the latest git version with your rtcp_timeout changes.

@lcligny
Author

lcligny commented May 31, 2016

RES is at 187M and 203M after a non-stop 24h run, so for my setup and workload the original issue is solved. I will let killdashnine continue his testing.

@adubovikov
Member

thanks for the update. Can you please let us know if RES keeps growing or stays in the same range.

Wbr,
Alexandr


@lcligny
Author

lcligny commented Jun 1, 2016

276M and 282M RES today, so about 100M more than yesterday at the same time. I'm wondering if it will continue to grow steadily. In that case it would be difficult to just run it and forget it.

@killdashnine

Well, on at least 3 machines captagent crashed after the last git pull (I need to check if I can find any cause).

But on one of the hosts I started with 38M RES and it's now at 4G.


@adubovikov
Member

I have updated uthash. Can you please re-check again ?

I will prepare a redis version, just to be sure that the problem is really in the hash table and not in another module.

thanks for your help!

Wbr,
Alexandr


@killdashnine

Ok, I have updated all the machines now. I think the processes got killed because they used too much memory. Starting now at around 36-39M RES on all machines; I will leave it running for 24h unless I see a big increase during the day.

@lcligny
Author

lcligny commented Jun 1, 2016

I have updated too, to see if it helps. Starting at 108M RES (I listen on 3 interfaces, which is why I start with higher memory usage).

@killdashnine

Small update: grown from 39M to 65M, and captagent is using 100% CPU (I have seen this start to happen after bc9a735).

@adubovikov
Member

looks like I have found the memory leak. Can you please compile the latest git ?


@killdashnine

killdashnine commented Jun 1, 2016

Memory is looking good, not seeing the 4 Kbytes per 2 seconds any more, but CPU is at 100%.

edit: spoke too soon, it's increasing rapidly again now (4kB/s)

@adubovikov
Member

I am not sure about the CPU.... nothing has changed that could impact CPU usage like that. Did you change the rtcp_timeout value in database_hash.xml ?


@killdashnine

I have not, I'm using the default files; also see my update. Now it's leaking again at 4kB/s.


@adubovikov
Member

just wait, this is the memory hash growing; at some point it will stop.

about the CPU, can you please confirm that the version before bc9a735 works without 100% CPU usage ?


@adubovikov
Member

any progress ? ;-)


@killdashnine

Memory is now at 51M (started at 37M), but there are 95 active calls on this node now, so I expect some memory usage there. Let's wait for the 24h test.

Anyway, I didn't have the chance yet to check that revision. Two of the 3 threads are running at 100% CPU and one of them is in the Read state all the time.

This is so far only on 3 specific nodes running CentOS 6. The Debian node is fine, and the other CentOS 6 nodes are too. CentOS 7 nodes are also OK. The captagent config is the same on all machines, but the 3 nodes at 100% do have a lot of traffic on them.

Will update you tomorrow.


@lcligny
Copy link
Author

lcligny commented Jun 2, 2016

For information: after 12 hours with the latest git on my two SIP boxes running captagent, RES is now at 120M and 119M. As always, it started at 108M for me.

@adubovikov
Copy link
Member

So this means both memory leaks have been fixed, correct?


@lcligny
Copy link
Author

lcligny commented Jun 2, 2016

For my setup, yes.

@killdashnine
Copy link

The max RES I have is 107M (it started at 36M). I'll leave it running another day to see if it grows even bigger. I'm still seeing the 100% CPU issue, with two threads running at 100%, one of which is constantly doing a read. Hopefully I will be able to run some tests on that today.
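
For reference, the per-thread CPU usage and the constant read can be observed with something like the commands below; the process name captagent and the <TID> placeholder are assumptions for this setup:

    # show per-thread CPU usage for the running captagent process
    top -H -p "$(pidof captagent)"

    # attach strace to the busy thread reported by top to confirm
    # it is spinning on read() rather than blocking
    strace -p <TID> -e trace=read -c

where <TID> is the thread ID of the 100% thread.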

@killdashnine
Copy link

Unfortunately RES is still growing, though not as fast as before; it is at 172M now. I haven't had a chance to figure out the 100% CPU issue yet.

@adubovikov
Copy link
Member

Matthias, are you sure that you have cleaned up and replaced all the modules?


@killdashnine
Copy link

Yes, I run 'make uninstall' and 'make clean' first before pulling and compiling.
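
Roughly, from the captagent source tree that is the sequence below; treat it as a sketch, with the configure flags left as whatever was used for the original build:

    make uninstall                        # remove the installed binary and modules
    make clean                            # remove build artefacts from the source tree
    git pull                              # update to the latest revision
    ./configure && make && make install   # rebuild and reinstall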

@adubovikov
Copy link
Member

@lcligny, how does it look for you? Do you still have a memory leak?


@lcligny
Copy link
Author

lcligny commented Jun 3, 2016

Nope, I'm still at 120M RES; it hasn't grown since yesterday morning.

@adubovikov
Copy link
Member

Matthias, can you check that after 'make uninstall' all of captagent's files
are removed?


@killdashnine
Copy link

I can confirm that all files were removed: only the configuration files were left, but all the .so/.a/.la files and the captagent binary were removed. I can also confirm that 'make clean' deletes all the .so/.a/.o files from the source directory.

@adubovikov
Copy link
Member

Matthias,

can you check it on a test server if you have one?

Wbr,
Alexandr


@killdashnine
Copy link

Hi, this actually was a test server. I checked it with locate (after running updatedb every time) to make sure all files were deleted.
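
In other words, something like this (run as root so updatedb can index everything):

    updatedb          # refresh the locate file database
    locate captagent  # list any captagent files still present on disk

and any leftovers show up in the second command's output.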

@adubovikov
Copy link
Member

any updates ?


@lcligny
Copy link
Author

lcligny commented Jun 7, 2016

Still OK and in production for me. Do you want me to close the issue?

@adubovikov
Copy link
Member

I want to, but I'm not sure what's going on with Matthias. :-)


@killdashnine
Copy link

Sorry guys, bit busy on my end.

RES maxes out at 267M, so I think the hash table extends under high call load
but then doesn't shrink back. At least it looks like it's not leaking anymore
(or at least not significantly).
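
For reference, RES here is the resident set size as reported by top; a simple way to track it over time, assuming the process is just named captagent, is a loop like:

    # log the resident set size (in kB) of captagent once a minute
    while true; do
        date '+%T'
        ps -o rss= -p "$(pidof captagent)"
        sleep 60
    done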

Still seeing the 100% CPU issue; it's a busy week, so I hope to get back
to it later this week.


@adubovikov
Copy link
Member

adubovikov commented Jun 7, 2016

OK, thank you. Let's close this ticket as solved for the memory leak and create
a new one for the 100% CPU issue.


@killdashnine
Copy link

I have opened #70 for the CPU usage issue
