More details around failure reasons. #5

Merged
RomanDzhabarov merged 5 commits into master from more_details on Aug 11, 2016

Conversation

RomanDzhabarov
Member

No description provided.

@mattklein123
Member

👍

@RomanDzhabarov RomanDzhabarov merged commit 5ba2ffa into master Aug 11, 2016
@mattklein123 mattklein123 deleted the more_details branch August 11, 2016 21:51
@RomanDzhabarov RomanDzhabarov restored the more_details branch August 11, 2016 23:14
@RomanDzhabarov RomanDzhabarov deleted the more_details branch August 17, 2016 18:55
@hzariv hzariv mentioned this pull request Jun 8, 2017
JimmyCYJ pushed a commit to JimmyCYJ/envoy that referenced this pull request Jun 7, 2018
…base

Merge pull request envoyproxy#4 from mangchiandjjoe/sds_dynamic_secret
rshriram pushed a commit to rshriram/envoy that referenced this pull request Oct 30, 2018
Create README.md for endpoints code dump
rshriram pushed a commit to rshriram/envoy that referenced this pull request Oct 30, 2018
RyanTheOptimist pushed a commit that referenced this pull request Jan 10, 2024
Commit Message: The probing socket is released when port migration fails. If this happens while an incoming packet is being processed during an I/O event, the subsequent socket read could cause a use-after-free.
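
The hazard is easiest to see in reduced form. Below is a hedged C++ sketch of the pattern described above, not the actual Envoy code: all names (`Socket`, `probing_socket_`, `processPacket`, `kMaxPacketsPerEvent`) are hypothetical stand-ins. The shape of the fix is to re-check socket ownership on every iteration instead of holding a raw pointer across a callback that may release the socket.

```
// Hedged sketch; hypothetical names, not the actual Envoy QUIC code.
#include <memory>

namespace {

constexpr int kMaxPacketsPerEvent = 16;

struct Socket {
  void read() {}  // stand-in for reading one packet from the socket
};

struct Connection {
  std::unique_ptr<Socket> probing_socket_ = std::make_unique<Socket>();

  // Stand-in for packet processing that can release the probing socket,
  // e.g. when port migration fails.
  void processPacket() { probing_socket_.reset(); }

  // Buggy shape: the raw pointer is captured once, but processPacket() may
  // free the socket, so the next read() is a use-after-free.
  void onFileEventBuggy() {
    Socket* sock = probing_socket_.get();
    for (int i = 0; i < kMaxPacketsPerEvent; ++i) {
      sock->read();  // dangles once processPacket() has released the socket
      processPacket();
    }
  }

  // Fixed shape: re-check ownership before every read, so releasing the
  // socket inside the callback ends the loop safely.
  void onFileEventFixed() {
    for (int i = 0; i < kMaxPacketsPerEvent && probing_socket_; ++i) {
      probing_socket_->read();
      processPacket();
    }
  }
};

}  // namespace
```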

```
[2024-01-08 16:30:53.386][12][critical][backtrace] [./source/server/backtrace.h:104] Caught Segmentation fault, suspect faulting address 0x0
[2024-01-08 16:30:53.387][12][critical][backtrace] [./source/server/backtrace.h:91] Backtrace (use tools/stack_decode.py to get line numbers):
[2024-01-08 16:30:53.387][12][critical][backtrace] [./source/server/backtrace.h:92] Envoy version: 0/1.29.0-dev/test/DEBUG/BoringSSL
[2024-01-08 16:30:53.413][12][critical][backtrace] [./source/server/backtrace.h:96] #0: Envoy::SignalAction::sigHandler() [0x55bb876d499e]
[2024-01-08 16:30:53.413][12][critical][backtrace] [./source/server/backtrace.h:98] #1: [0x7f55fbf92510]
[2024-01-08 16:30:53.440][12][critical][backtrace] [./source/server/backtrace.h:96] #2: Envoy::Network::Utility::readPacketsFromSocket() [0x55bb875de0ef]
[2024-01-08 16:30:53.466][12][critical][backtrace] [./source/server/backtrace.h:96] #3: Envoy::Quic::EnvoyQuicClientConnection::onFileEvent() [0x55bb8663e1eb]
[2024-01-08 16:30:53.492][12][critical][backtrace] [./source/server/backtrace.h:96] #4: Envoy::Quic::EnvoyQuicClientConnection::setUpConnectionSocket()::$_0::operator()() [0x55bb8663f192]
[2024-01-08 16:30:53.518][12][critical][backtrace] [./source/server/backtrace.h:96] #5: std::__invoke_impl<>() [0x55bb8663f151]
[2024-01-08 16:30:53.544][12][critical][backtrace] [./source/server/backtrace.h:96] #6: std::__invoke_r<>() [0x55bb8663f0e2]
[2024-01-08 16:30:53.569][12][critical][backtrace] [./source/server/backtrace.h:96] #7: std::_Function_handler<>::_M_invoke() [0x55bb8663efc2]
[2024-01-08 16:30:53.595][12][critical][backtrace] [./source/server/backtrace.h:96] #8: std::function<>::operator()() [0x55bb85cb8f44]
[2024-01-08 16:30:53.621][12][critical][backtrace] [./source/server/backtrace.h:96] #9: Envoy::Event::DispatcherImpl::createFileEvent()::$_5::operator()() [0x55bb8722560f]
[2024-01-08 16:30:53.648][12][critical][backtrace] [./source/server/backtrace.h:96] #10: std::__invoke_impl<>() [0x55bb872255c1]
[2024-01-08 16:30:53.674][12][critical][backtrace] [./source/server/backtrace.h:96] #11: std::__invoke_r<>() [0x55bb87225562]
[2024-01-08 16:30:53.700][12][critical][backtrace] [./source/server/backtrace.h:96] #12: std::_Function_handler<>::_M_invoke() [0x55bb872253e2]
[2024-01-08 16:30:53.700][12][critical][backtrace] [./source/server/backtrace.h:96] #13: std::function<>::operator()() [0x55bb85cb8f44]
[2024-01-08 16:30:53.726][12][critical][backtrace] [./source/server/backtrace.h:96] #14: Envoy::Event::FileEventImpl::mergeInjectedEventsAndRunCb() [0x55bb872358ec]
[2024-01-08 16:30:53.752][12][critical][backtrace] [./source/server/backtrace.h:96] #15: Envoy::Event::FileEventImpl::assignEvents()::$_1::operator()() [0x55bb87235ed1]
[2024-01-08 16:30:53.778][12][critical][backtrace] [./source/server/backtrace.h:96] #16: Envoy::Event::FileEventImpl::assignEvents()::$_1::__invoke() [0x55bb87235949]
[2024-01-08 16:30:53.804][12][critical][backtrace] [./source/server/backtrace.h:96] #17: event_persist_closure [0x55bb87fab72b]
[2024-01-08 16:30:53.830][12][critical][backtrace] [./source/server/backtrace.h:96] #18: event_process_active_single_queue [0x55bb87faada2]
[2024-01-08 16:30:53.856][12][critical][backtrace] [./source/server/backtrace.h:96] #19: event_process_active [0x55bb87fa56c8]
[2024-01-08 16:30:53.882][12][critical][backtrace] [./source/server/backtrace.h:96] #20: event_base_loop [0x55bb87fa45cc]
[2024-01-08 16:30:53.908][12][critical][backtrace] [./source/server/backtrace.h:96] #21: Envoy::Event::LibeventScheduler::run() [0x55bb8760a59f]
```
Risk Level: low
Testing: new unit test
Docs Changes: N/A
Release Notes: Yes
Platform Specific Features: N/A

Signed-off-by: Dan Zhang <danzh@google.com>
Co-authored-by: Dan Zhang <danzh@google.com>
nezdolik pushed a commit to nezdolik/envoy that referenced this pull request May 4, 2024
…_cache.cc

This avoids a deadlock when a library being dlopen'ed creates, as part of
its static constructors, a thread that quickly needs to call malloc. We are
still inside the dlopen call (so an internal glibc mutex is held) when the
new thread executes code and eventually calls malloc, which in turn calls
tls_get_addr_tail, which waits for the dlopen mutex to be released. If the
dlopen'ing thread then also calls malloc as part of its constructors, the
two threads deadlock.

Fix is similar to
gperftools/gperftools@7852eeb
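
For illustration, a hedged sketch of that technique follows: pin the per-thread cache pointer to the initial-exec TLS model so reading it is a fixed-offset load that never goes through the lazy `__tls_get_addr` path (and thus never waits on the loader lock). The names below are stand-ins, not the real tcmalloc code, and for a dlopen'ed library this only works while the static TLS area has spare room.

```
// Hedged sketch of the mitigation shape; stand-in names, not the actual
// thread_cache.cc patch.
#if defined(__GNUC__)
#define ATTR_INITIAL_EXEC __attribute__((tls_model("initial-exec")))
#else
#define ATTR_INITIAL_EXEC
#endif

struct ThreadCacheDemo {
  int free_list_size = 0;  // placeholder for real per-thread cache state
};

// With initial-exec TLS the access compiles to a fixed %fs-relative load,
// safe even on a thread spawned while dlopen still holds its mutex; the
// default (global-dynamic) model would call __tls_get_addr instead.
static __thread ThreadCacheDemo* threadlocal_data_ ATTR_INITIAL_EXEC = nullptr;

ThreadCacheDemo* GetCacheIfPresent() { return threadlocal_data_; }
```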

Stack of the dlopening thread:
    #0  0x00007fd5406ca93c in __lll_lock_wait () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/libpthread.so.0
    #1  0x00007fd5406c45a5 in pthread_mutex_lock () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/libpthread.so.0
    ... proprietary code in the stack
    #9  0x00007fd5074f0367 in __static_initialization_and_destruction_0 (__initialize_p=1, __priority=65535) at src/ClientImpl.cpp:15
    #10 0x00007fd5074f06d7 in _GLOBAL__sub_I_ClientImpl.cpp(void) () at src/ClientImpl.cpp:85
    #11 0x00007fd50757aa46 in __do_global_ctors_aux ()
    #12 0x00007fd5073e985f in _init () from ...
    #13 0x00007fd53bf9dec8 in ?? () from ...
    #14 0x00007fd54d637a5d in call_init.part () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/ld-linux-x86-64.so.2
    #15 0x00007fd54d637bab in _dl_init () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/ld-linux-x86-64.so.2
    #16 0x00007fd54d63c160 in dl_open_worker () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/ld-linux-x86-64.so.2
    #17 0x00007fd54d637944 in _dl_catch_error () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/ld-linux-x86-64.so.2
    #18 0x00007fd54d63b7d9 in _dl_open () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/ld-linux-x86-64.so.2
    #19 0x00007fd54d61f2b9 in dlopen_doit () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/libdl.so.2
    #20 0x00007fd54d637944 in _dl_catch_error () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/ld-linux-x86-64.so.2
    #21 0x00007fd54d61f889 in _dlerror_run () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/libdl.so.2
    #22 0x00007fd54d61f351 in dlopen@@GLIBC_2.2.5 () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/libdl.so.2

Stack of the newly created thread calling tls_get_addr_tail:
    #0  0x00007fd5406ca93c in __lll_lock_wait () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/libpthread.so.0
    #1  0x00007fd5406c4622 in pthread_mutex_lock () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/libpthread.so.0
    #2  0x00007fd54d63a2ed in tls_get_addr_tail () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/ld-linux-x86-64.so.2
    #3  0x00007fd53fee877d in tcmalloc::ThreadCache::CreateCacheIfNecessary () at src/thread_cache.cc:344
    #4  0x00007fd53fecb4ab in tcmalloc::ThreadCache::GetCache () at src/thread_cache.h:437
    #5  0x00007fd53fefeccb in (anonymous namespace)::do_malloc (size=56) at src/tcmalloc.cc:1354
    #6  tcmalloc::do_allocate_full<tcmalloc::cpp_throw_oom> (size=56) at src/tcmalloc.cc:1762
    #7  tcmalloc::allocate_full_cpp_throw_oom (size=56) at src/tcmalloc.cc:1776
    #8  0x00007fd53ff01b80 in tcmalloc::dispatch_allocate_full<tcmalloc::cpp_throw_oom> (size=56) at src/tcmalloc.cc:1785
    #9  malloc_fast_path<tcmalloc::cpp_throw_oom> (size=56) at src/tcmalloc.cc:1845
    #10 tc_new (size=56) at src/tcmalloc.cc:1980
    ... proprietary code in the stack
    #26 0x00007fd5406c1ef4 in start_thread () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/libpthread.so.0
    #27 0x00007fd5403ba01d in clone () from /data3/mwrep/rgeissler/core.tls/opt/1A/toolchain/x86_64-2.6.32-v2/lib64/libc.so.6
arminabf pushed a commit to arminabf/envoy that referenced this pull request Jun 5, 2024
ggreenway pushed a commit that referenced this pull request Aug 1, 2024
When the ``AsyncTcpClient`` is being destroyed while it still has an
active client connection, there is a crash: during the instance
destruction the ``ClientConnection`` object is also destroyed, causing
``raiseEvent`` to be called back into the ``AsyncTcpClient``
mid-destruction.
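
A reduced sketch of the re-entrancy hazard and one common guard shape follows; the types are stand-ins, not the real Envoy classes, and the actual fix may differ in detail. The idea is to disarm the event callback before tearing the connection down, so its destructor cannot call back into a half-destroyed owner.

```
// Hedged sketch; stand-in types, not the actual Envoy fix.
#include <functional>
#include <memory>

struct Connection {
  std::function<void()> on_close;  // mirrors raiseEvent() fired on close
  ~Connection() {
    if (on_close) on_close();  // closing the socket raises a final event
  }
};

class AsyncTcpClientDemo {
public:
  AsyncTcpClientDemo() : connection_(std::make_unique<Connection>()) {
    connection_->on_close = [this] { onEvent(); };
  }

  ~AsyncTcpClientDemo() {
    // Disarm the callback before destroying the connection, so its
    // destructor cannot re-enter this half-destroyed client.
    connection_->on_close = nullptr;
    connection_.reset();
  }

  void onEvent() { /* touches members; unsafe mid-destruction */ }

private:
  std::unique_ptr<Connection> connection_;
};
```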

Caught with the following stack trace:
```
Caught Segmentation fault, suspect faulting address 0x0
Backtrace (use tools/stack_decode.py to get line numbers):
Envoy version: ee8c765a07037033766ea556c032120b497152b3/1.27.0/Clean/RELEASE/BoringSSL
#0: __restore_rt [0x7d80ab903420]
#1: Envoy::Extensions::AccessLoggers::Fluentd::FluentdAccessLoggerImpl::onEvent() [0x58313528746b]
#2: Envoy::Tcp::AsyncTcpClientImpl::onEvent() [0x5831359da00a]
#3: Envoy::Network::ConnectionImplBase::raiseConnectionEvent() [0x583135f0521d]
#4: Envoy::Network::ConnectionImpl::raiseEvent() [0x583135e9fed9]
#5: Envoy::Network::ConnectionImpl::closeSocket() [0x583135e9f90c]
#6: Envoy::Network::ConnectionImpl::close() [0x583135e9e54c]
#7: Envoy::Network::ConnectionImpl::~ConnectionImpl() [0x583135e9de5c]
#8: Envoy::Network::ClientConnectionImpl::~ClientConnectionImpl() [0x5831355fd25e]
#9: Envoy::Tcp::AsyncTcpClientImpl::~AsyncTcpClientImpl() [0x5831359da247]
#10: Envoy::Extensions::AccessLoggers::Fluentd::FluentdAccessLoggerImpl::~FluentdAccessLoggerImpl() [0x583135289350]
#11: Envoy::Extensions::AccessLoggers::Fluentd::FluentdAccessLog::ThreadLocalLogger::~ThreadLocalLogger() [0x58313528adbf]
#12: Envoy::ThreadLocal::InstanceImpl::shutdownThread() [0x58313560373a]
#13: Envoy::Server::WorkerImpl::threadRoutine() [0x583135630c0a]
#14: Envoy::Thread::ThreadImplPosix::ThreadImplPosix()::{lambda()#1}::__invoke() [0x5831364e88d5]
#15: start_thread [0x7d80ab8f7609]
```
Risk Level: low
Testing: unit tests
Docs Changes: none
Release Notes: none
Platform Specific Features: none

---------

Signed-off-by: Ohad Vano <ohadvano@gmail.com>
martinduke pushed a commit to martinduke/envoy that referenced this pull request Aug 8, 2024
…yproxy#35410)

Signed-off-by: Martin Duke <martin.h.duke@gmail.com>