Memory leak when connecting from client using WS spec revision 0 #1

Closed
dtgriscom opened this issue Jun 14, 2013 · 1 comment

@dtgriscom

[I'm posting here because libwebsockets.org seems to be down.]

Each time I connect to my WS server using Safari 5.1.9 under OS X 10.6.8, Libwebsockets complains "WARN: Unknown client spec version 0". That's fine, but when I run the server under Valgrind and then quit, Valgrind reports leaked memory.

Here's the output from a run with four failed connections:

valgrind --dsymutil=yes --leak-check=full ./server
==49568== Memcheck, a memory error detector
==49568== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.
==49568== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info
==49568== Command: ./server
==49568==
[1371238824:2459] NOTICE: Initial logging level 7
[1371238824:2677] NOTICE: Library version: 1.3 7cf6cb0
[1371238824:2720] NOTICE: Started with daemon pid 0
[1371238824:2748] NOTICE: static allocation: 4472 + (16 x 256 fds) = 8568 bytes
[1371238824:2796] NOTICE: canonical_hostname = octobanger
[1371238824:2805] NOTICE: Compiled without SSL support
[1371238824:2813] NOTICE: per-conn mem: 160 + 1360 headers + protocol rx buf
[1371238824:2951] NOTICE: Listening on port 7681
[1371238834:8789] WARN: Unknown client spec version 0
[1371238834:8970] WARN: Unknown client spec version 0
[1371238838:0604] WARN: Unknown client spec version 0
[1371238838:0718] WARN: Unknown client spec version 0

Cleaning up...
==49568==
==49568== HEAP SUMMARY:
==49568== in use at exit: 17,626 bytes in 40 blocks
==49568== total heap usage: 103 allocs, 63 frees, 28,746 bytes allocated
==49568==
==49568== 5,440 bytes in 4 blocks are definitely lost in loss record 37 of 37
==49568== at 0x100040C16: malloc (vg_replace_malloc.c:274)
==49568== by 0x100055C45: lws_allocate_header_table (in /usr/local/lib/libwebsockets.4.0.0.dylib)
==49568== by 0x100058D46: libwebsocket_create_new_server_wsi (in /usr/local/lib/libwebsockets.4.0.0.dylib)
==49568== by 0x100058F3A: lws_server_socket_service (in /usr/local/lib/libwebsockets.4.0.0.dylib)
==49568== by 0x100053DC8: libwebsocket_service_fd (in /usr/local/lib/libwebsockets.4.0.0.dylib)
==49568== by 0x1000540D7: libwebsocket_service (in /usr/local/lib/libwebsockets.4.0.0.dylib)
==49568== by 0x1000039DE: wsPoll() (wsServer.cpp:133)
==49568== by 0x100000A48: main (main.cpp:73)
==49568==
==49568== LEAK SUMMARY:
==49568== definitely lost: 5,440 bytes in 4 blocks
==49568== indirectly lost: 0 bytes in 0 blocks
==49568== possibly lost: 0 bytes in 0 blocks
==49568== still reachable: 12,098 bytes in 35 blocks
==49568== suppressed: 88 bytes in 1 blocks
==49568== Reachable blocks (those to which a pointer was found) are not shown.
==49568== To see them, rerun with: --leak-check=full --show-reachable=yes
==49568==
==49568== For counts of detected and suppressed errors, rerun with: -v
==49568== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)

The problem: the header table isn't being freed. (Note that the 5,440 definitely-lost bytes are 4 × 1,360, exactly the per-connection header allocation shown in the NOTICE output above, one for each failed connection.) The fix is to change handshake.c:230 from:

default:
    lwsl_warn("Unknown client spec version %d\n",
                       wsi->ietf_spec_revision);
    goto bail;
}

to:

default:
    lwsl_warn("Unknown client spec version %d\n",
                       wsi->ietf_spec_revision);
    goto bail_nuke_ah;
}

The "bail_nuke_ah" label adds a free() of the header table:

bail_nuke_ah:
    /* drop the header info */
    if (wsi->u.hdr.ah)
        free(wsi->u.hdr.ah);
bail:
    lwsl_info("closing connection at libwebsocket_read bail:\n");

Note that the "goto bail"s and "goto bail_nuke_ah"s seem to be used almost interchangeably; someone should review them to make sure the right choices are being made.

@warmcat
Collaborator

warmcat commented Oct 13, 2015

Thanks... this area got refactored a while back; there's a point where the main flow frees the headers, which is commented, and every other path uses bail_nuke_ah now.

warmcat closed this as completed Oct 13, 2015
fog1111 mentioned this issue Nov 24, 2017
Merkyrio pushed a commit to liveryvideo/libwebsockets that referenced this issue May 2, 2024
There is a potential deadlock between lws_create_context and lws_service.
This TSAN report gives more details (the proprietary-code backtrace was stripped):

==================
WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock) (pid=516380)
  Cycle in lock order graph: M0 (0x7bb4000016b8) => M1 (0x7bb400000360) => M0

  Mutex M1 acquired here while holding mutex M0 in thread T2:
    #0 pthread_mutex_lock
    #1 lws_mutex_refcount_lock libwebsockets/lib/core/libwebsockets.c:1471:2 (libwebsockets.so.19+0x48ff2)
    #2 lws_wsi_inject_to_loop libwebsockets/lib/core-net/wsi.c:354:2 (libwebsockets.so.19+0xbf8d7)
    #3 __lws_create_event_pipes libwebsockets/lib/core-net/vhost.c:1172:8 (libwebsockets.so.19+0xb490f)
    #4 lws_create_context libwebsockets/lib/core/context.c:1302:6 (libwebsockets.so.19+0x3e592)

  Mutex M0 previously acquired by the same thread here:
    #0 pthread_mutex_lock
    #1 lws_mutex_refcount_lock libwebsockets/lib/core/libwebsockets.c:1471:2 (libwebsockets.so.19+0x48ff2)
    #2 lws_create_context libwebsockets/lib/core/context.c:1301:2 (libwebsockets.so.19+0x3e580)

  Mutex M0 acquired here while holding mutex M1 in thread T8:
    #0 pthread_mutex_lock
    #1 lws_mutex_refcount_lock libwebsockets/lib/core/libwebsockets.c:1471:2 (libwebsockets.so.19+0x48ff2)
    #2 lws_sul_plat_unix libwebsockets/lib/plat/unix/unix-init.c:64:2 (libwebsockets.so.19+0x1e4a1)
    #3 __lws_sul_service_ripe libwebsockets/lib/core-net/sorted-usec-list.c:161:3 (libwebsockets.so.19+0xbd871)
    #4 _lws_plat_service_tsi libwebsockets/lib/plat/unix/unix-service.c:125:7 (libwebsockets.so.19+0x1fc07)
    #5 lws_plat_service libwebsockets/lib/plat/unix/unix-service.c:235:9 (libwebsockets.so.19+0x20237)
    #6 lws_service libwebsockets/lib/core-net/service.c:838:6 (libwebsockets.so.19+0xbd0c2)

  Mutex M1 previously acquired by the same thread here:
    #0 pthread_mutex_lock
    #1 lws_mutex_refcount_lock libwebsockets/lib/core/libwebsockets.c:1471:2 (libwebsockets.so.19+0x48ff2)
    #2 _lws_plat_service_tsi libwebsockets/lib/plat/unix/unix-service.c:121:2 (libwebsockets.so.19+0x1fbee)
    #3 lws_plat_service libwebsockets/lib/plat/unix/unix-service.c:235:9 (libwebsockets.so.19+0x20237)
    #4 lws_service libwebsockets/lib/core-net/service.c:838:6 (libwebsockets.so.19+0xbd0c2)

SUMMARY: ThreadSanitizer: lock-order-inversion (potential deadlock) in pthread_mutex_lock
==================
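
For anyone unfamiliar with this class of TSAN report: thread T2 (the lws_create_context path) takes M0 then M1, while thread T8 (the lws_service path) takes M1 then M0; if the two interleave, each ends up waiting on the lock the other holds. Here is a minimal standalone sketch of the same lock-order inversion, with generic mutexes standing in for the lws context and service locks (not the actual lws code):

/* build: gcc -g -fsanitize=thread inversion.c -lpthread && ./a.out */
#include <pthread.h>

static pthread_mutex_t m0 = PTHREAD_MUTEX_INITIALIZER; /* plays the role of M0 */
static pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER; /* plays the role of M1 */

/* like the lws_create_context path: M0, then M1 */
static void *creator(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&m0);
    pthread_mutex_lock(&m1);
    pthread_mutex_unlock(&m1);
    pthread_mutex_unlock(&m0);
    return NULL;
}

/* like the lws_service path: M1, then M0 -- the opposite order */
static void *servicer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&m1);
    pthread_mutex_lock(&m0);
    pthread_mutex_unlock(&m0);
    pthread_mutex_unlock(&m1);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, creator, NULL);
    pthread_create(&b, NULL, servicer, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    /* usually exits cleanly, but TSAN records the lock acquisition graph
     * and reports the M0 => M1 => M0 cycle as a potential deadlock */
    return 0;
}

The usual remedy is to establish a single global acquisition order for the two locks, or to release the first lock before taking the second.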