
build error #8

Closed
michalkjp opened this issue Feb 20, 2010 · 2 comments

@michalkjp

Hi,

I get this error while building:

[ 13%] Building CXX object src/CMakeFiles/hphp_runtime_static.dir/cpp/base/program_functions.cpp.o
In file included from /home/michal/tmp/hiphop/hiphop-php/src/cpp/base/server/server_stats.h:22,
from /home/michal/tmp/hiphop/hiphop-php/src/cpp/base/program_functions.cpp:29:
/home/michal/tmp/hiphop/hiphop-php/src/cpp/base/shared/shared_string.h:42: error: wrong number of template arguments (2, should be 4)
/usr/include/tbb/concurrent_hash_map.h:48: error: provided for ‘template<class Key, class T, class HashCompare, class A> class tbb::concurrent_hash_map’
/home/michal/tmp/hiphop/hiphop-php/src/cpp/base/shared/shared_string.h:43: error: ‘InternMap’ is not a class or namespace
/home/michal/tmp/hiphop/hiphop-php/src/cpp/base/shared/shared_string.h:43: error: expected ‘,’ or ‘...’ before ‘&’ token
make[2]: *** [src/CMakeFiles/hphp_runtime_static.dir/cpp/base/program_functions.cpp.o] Error 1
make[1]: *** [src/CMakeFiles/hphp_runtime_static.dir/all] Error 2

BTW, the re2c invocation in src/third_party/xhp/xhp/CMakeFiles/xhp.dir/build.make passes an additional "-c" option that my version of re2c does not recognize.

@japonicus

I had a similar problem. You'll need to use a more recent version of Intel Threading Building Blocks (TBB) from http://www.threadingbuildingblocks.org/ver.php?fid=142
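For context, a minimal sketch of the mismatch (the typedef below is illustrative, not the actual InternMap definition from shared_string.h): the HipHop source instantiates tbb::concurrent_hash_map with two template arguments, which only compiles against TBB versions that supply defaults for the remaining hash-compare and allocator parameters.

```cpp
#include <tbb/concurrent_hash_map.h>
#include <string>

// Older TBB releases require all four template arguments:
//   tbb::concurrent_hash_map<Key, T, HashCompare, Allocator>
// Recent releases default the last two, so a two-argument
// instantiation like the one in shared_string.h compiles:
typedef tbb::concurrent_hash_map<std::string, int> InternMap;
```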

@michalkjp

It works. Thanks!

paroski added a commit that referenced this issue Nov 14, 2013
We had a bug where a PHP program could recurse infinitely in a way
that didn't hit any of our native stack overflow checks, causing the
process to segfault. Below I've included a snippet of the callstack
that caused HHVM to crash.

The fix is to make invokeFunc(), invokeFuncFew(), and invokeContFunc()
unconditionally perform a stack overflow check for the native stack. I
tried to keep the fix minimal and non-invasive so that we can get this
hotfixed if needed.
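As a rough illustration of such a check (the names below are hypothetical, not HHVM's actual API), the idea is to compare the current stack position against the thread's limit on VM entry and throw before the guard page is hit:

```cpp
#include <stdexcept>

// Hypothetical low-water mark for this thread's native stack; the real
// value would be computed from the stack base and size at thread startup.
static thread_local char* s_stackLimit = nullptr;

inline void checkNativeStack() {
  char marker;                    // its address approximates the stack pointer
  if (&marker < s_stackLimit) {   // the stack grows downward on x86-64
    throw std::runtime_error("maximum native stack depth exceeded");
  }
}
```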

  #0  0x00000000032ba2ef in malloc ()
  #1  0x000000000257401e in HPHP::Util::canonicalize(char const*, unsigned long, bool) ()
  #2  0x0000000001c15f0e in HPHP::resolve_include(HPHP::String const&, char const*, bool (*)(HPHP::String const&, void*), void*) ()
  #3  0x0000000001bd0327 in HPHP::Eval::resolveVmInclude(HPHP::StringData*, char const*, stat*) ()
  #4  0x0000000001fe47d0 in HPHP::VMExecutionContext::lookupPhpFile(HPHP::StringData*, char const*, bool*) ()
  #5  0x0000000001fe521a in HPHP::VMExecutionContext::evalInclude(HPHP::StringData*, HPHP::StringData const*, bool*) ()
  #6  0x0000000001c17231 in HPHP::AutoloadHandler::Result HPHP::AutoloadHandler::loadFromMap<HPHP::ConstantExistsChecker>(HPHP::String const&, HPHP::String const&, bool, HPHP::ConstantExistsChecker const&) ()
  #7  0x0000000001c174ed in HPHP::AutoloadHandler::autoloadConstant(HPHP::StringData*) ()
  #8  0x00000000020c1c73 in HPHP::Unit::loadCns(HPHP::StringData const*) ()
  #9  0x0000000001f1d10e in HPHP::Transl::lookupCnsHelper(HPHP::TypedValue const*, HPHP::StringData*, bool) ()
  #10 0x000000002827f855 in ?? ()
  #11 0x00000000026e566e in enterTCHelper ()
  #12 0x0000000001f35ef0 in HPHP::Transl::TranslatorX64::enterTC(unsigned char*, void*) ()
  #13 0x0000000002018004 in HPHP::VMExecutionContext::enterVM(HPHP::TypedValue*, HPHP::ActRec*) ()
  #14 0x000000000201821c in HPHP::VMExecutionContext::reenterVM(HPHP::TypedValue*, HPHP::ActRec*, HPHP::TypedValue*) ()
  #15 0x0000000002018612 in HPHP::VMExecutionContext::invokeFunc(HPHP::TypedValue*, HPHP::Func const*, HPHP::Array const&, HPHP::ObjectData*, HPHP::Class*, HPHP::VarEnv*, HPHP::StringData*, HPHP::VMExecutionContext::InvokeFlags) ()
  #16 0x0000000001c146d0 in HPHP::vm_call_user_func(HPHP::Variant const&, HPHP::Array const&, bool) ()
  #17 0x0000000001c17107 in HPHP::AutoloadHandler::Result HPHP::AutoloadHandler::loadFromMap<HPHP::ConstantExistsChecker>(HPHP::String const&, HPHP::String const&, bool, HPHP::ConstantExistsChecker const&) ()
  #18 0x0000000001c174ed in HPHP::AutoloadHandler::autoloadConstant(HPHP::StringData*) ()
  #19 0x00000000020c1c73 in HPHP::Unit::loadCns(HPHP::StringData const*) ()
  #20 0x0000000001f1d10e in HPHP::Transl::lookupCnsHelper(HPHP::TypedValue const*, HPHP::StringData*, bool) ()
  #21 0x000000002827f855 in ?? ()
  #22 0x00000000026e566e in enterTCHelper ()
  #23 0x0000000001f35ef0 in HPHP::Transl::TranslatorX64::enterTC(unsigned char*, void*) ()
  #24 0x0000000002018004 in HPHP::VMExecutionContext::enterVM(HPHP::TypedValue*, HPHP::ActRec*) ()
  ...
  #28992 0x0000000001c174ed in HPHP::AutoloadHandler::autoloadConstant(HPHP::StringData*) ()
  #28993 0x00000000020c1c73 in HPHP::Unit::loadCns(HPHP::StringData const*) ()
  #28994 0x0000000001f1d10e in HPHP::Transl::lookupCnsHelper(HPHP::TypedValue const*, HPHP::StringData*, bool) ()
  #28995 0x000000002827f855 in ?? ()
  #28996 0x00000000026e566e in enterTCHelper ()
  #28997 0x0000000001f35ef0 in HPHP::Transl::TranslatorX64::enterTC(unsigned char*, void*) ()
  #28998 0x0000000002018004 in HPHP::VMExecutionContext::enterVM(HPHP::TypedValue*, HPHP::ActRec*) ()
  #28999 0x000000000201821c in HPHP::VMExecutionContext::reenterVM(HPHP::TypedValue*, HPHP::ActRec*, HPHP::TypedValue*) ()
  #29000 0x0000000002018612 in HPHP::VMExecutionContext::invokeFunc(HPHP::TypedValue*, HPHP::Func const*, HPHP::Array const&, HPHP::ObjectData*, HPHP::Class*, HPHP::VarEnv*, HPHP::StringData*, HPHP::VMExecutionContext::InvokeFlags) ()
  #29001 0x0000000001c146d0 in HPHP::vm_call_user_func(HPHP::Variant const&, HPHP::Array const&, bool) ()
  #29002 0x0000000001c17676 in HPHP::AutoloadHandler::Result HPHP::AutoloadHandler::loadFromMap<HPHP::ClassExistsChecker>(HPHP::String const&, HPHP::String const&, bool, HPHP::ClassExistsChecker const&) ()
  #29003 0x0000000001c17a57 in HPHP::AutoloadHandler::invokeHandler(HPHP::String const&, bool) ()
  #29004 0x00000000020cb504 in HPHP::Unit::loadClass(HPHP::NamedEntity const*, HPHP::StringData const*) ()
  #29005 0x000000000203247e in void HPHP::VMExecutionContext::dispatchImpl<2>(int) ()
  #29006 0x000000000203ef43 in HPHP::VMExecutionContext::dispatchBB() ()
  #29007 0x0000000001f35f70 in HPHP::Transl::TranslatorX64::enterTC(unsigned char*, void*) ()
  #29008 0x0000000002017ed7 in HPHP::VMExecutionContext::enterVM(HPHP::TypedValue*, HPHP::ActRec*) ()
  #29009 0x000000000201888d in HPHP::VMExecutionContext::invokeFunc(HPHP::TypedValue*, HPHP::Func const*, HPHP::Array const&, HPHP::ObjectData*, HPHP::Class*, HPHP::VarEnv*, HPHP::StringData*, HPHP::VMExecutionContext::InvokeFlags) ()
  #29010 0x0000000002018ea0 in HPHP::VMExecutionContext::invokeUnit(HPHP::TypedValue*, HPHP::Unit*) ()
  #29011 0x0000000001c15964 in HPHP::invoke_file(HPHP::String const&, bool, char const*) ()
  #29012 0x0000000001c19a52 in HPHP::include_impl_invoke(HPHP::String const&, bool, char const*) ()
  #29013 0x0000000001c5d883 in HPHP::hphp_invoke(HPHP::ExecutionContext*, std::basic_fbstring<char, std::char_traits<char>, std::allocator<char>, std::fbstring_core<char> > const&, bool, HPHP::Array const&, HPHP::VRefParamValue const&, std::basic_fbstring<char, std::char_traits<char>, std::allocator<char>, std::fbstring_core<char> > const&, std::basic_fbstring<char, std::char_traits<char>, std::allocator<char>, std::fbstring_core<char> > const&, bool&, std::basic_fbstring<char, std::char_traits<char>, std::allocator<char>, std::fbstring_core<char> >&, bool, bool, bool) ()
  #29014 0x0000000001b2659e in HPHP::RPCRequestHandler::executePHPFunction(HPHP::Transport*, HPHP::SourceRootInfo&, HPHP::RPCRequestHandler::ReturnEncodeType) ()
  #29015 0x0000000001b285d3 in HPHP::RPCRequestHandler::handleRequest(HPHP::Transport*) ()
  #29016 0x0000000001b6b85b in HPHP::XboxWorker::doJob(HPHP::XboxTransport*) ()
  #29017 0x0000000001b669a3 in HPHP::JobQueueWorker<HPHP::XboxTransport*, HPHP::Server*, true, false, HPHP::JobQueueDropVMStack>::start() ()
  #29018 0x000000000252c127 in HPHP::AsyncFuncImpl::ThreadFunc(void*) ()
  #29019 0x00007f1ce3787f88 in start_thread (arg=0x7f1c377ff700)

Reviewed By: @jdelong

Differential Revision: D1052293
rrh referenced this issue Aug 1, 2014
Summary: 1. Extended the memory protector into a general system health monitor,
which takes multiple system metrics into account. Metrics can be added by
creating classes that implement the IHealthMonitorMetric interface (see
'hphp/facebook/runtime/health-monitor/health-metrics.h') and registering
them with the health monitor; a sketch of the interfaces follows below.

2. The health monitor periodically updates all metrics, computes a unified
current system health level, and broadcasts it to all observers. Objects
that implement the IHostHealthObserver interface can subscribe to the
system health level by registering themselves as observers of the system
health monitor.

3. Health monitor outputs are exposed to external systems by notifying
the "external-clients-shim" proxy. Currently, the output is sent to
stats.kvp on port 8088.
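A sketch of the two interfaces described above (the method names are assumptions; only the interface names appear in this summary):

```cpp
class IHealthMonitorMetric {
 public:
  virtual ~IHealthMonitorMetric() = default;
  virtual double sample() = 0;   // polled by the monitor on every tick
};

class IHostHealthObserver {
 public:
  virtual ~IHostHealthObserver() = default;
  // Invoked after the monitor folds all metric samples into one level.
  virtual void notifyNewStatus(int healthLevel) = 0;
};
```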

The display on Phabricator is a little messy:

— "hphp/util/health-monitor-types.h" is newly added, contents in it
(several interfaces) are abstracted out from job-queue.h and
memory-protector.h, but Phabricator recognizes it as “renamed and
modified from protector-policy.h”.

— Similar things happens to “hphp/facebook/runtime/health-monitor/external-clients-shim.h”
and “host-health-monitor.cpp”.

Reviewed By: @bertmaher

Differential Revision: D1458015
hhvm-bot pushed a commit that referenced this issue Dec 19, 2016
Summary:
The phpret vasm instruction can be specialized to use the load-pair instruction
when conditions are right. The architecture manual says that the two reads are
performed separately, so there should be no alignment issues, and that the
write-back is done after the reads.

Before
====
            655  B9: [profCount=1] (preds B8 B7)
            656     (23) RetCtrl<2> t1:StkPtr, t0:FramePtr, 1
            657         Main:
            658               0x3ca009b8  910083bc              add x28, x29, #0x20 (32)
            659               0x3ca009bc  aa1403e0              mov x0, x20
            660               0x3ca009c0  aa1303e1              mov x1, x19
            661               0x3ca009c4  f94007be              ldr x30, [x29, #8]  // <<----
            662               0x3ca009c8  f94003bd              ldr x29, [x29]        // <<----
            663               0x3ca009cc  d65f03c0              ret

After
===
   B9: [profCount=1] (preds B8 B7)
      (23) RetCtrl<2> t
Closes #7540

Differential Revision: D4313730

Pulled By: mxw

fbshipit-source-id: 112751342a9f830f376360e5219582703811d121
hhvm-bot pushed a commit that referenced this issue Sep 24, 2017
Summary:
Reported by UBSAN:
```lang=bash
001+ hphp/runtime/debugger/debugger_client.cpp:2515:7: runtime error: load of value 190, which is not a valid value for type 'bool'
002+     #0 0xa04d7ed in HPHP::Eval::DebuggerClient::saveConfig() hphp/runtime/debugger/debugger_client.cpp:2515
003+     #1 0xa02b046 in HPHP::Eval::DebuggerClient::setDebuggerClientSmallStep(bool const&) hphp/runtime/debugger/debugger_client.h:352
004+     #2 0xa02b046 in HPHP::Eval::DebuggerClient::loadConfig() hphp/runtime/debugger/debugger_client.cpp:2391
005+     #3 0xa02995f in HPHP::Eval::DebuggerClient::init(HPHP::Eval::DebuggerClientOptions const&) hphp/runtime/debugger/debugger_client.cpp:750
006+     #4 0xa015d01 in HPHP::Eval::DebuggerClient::start(HPHP::Eval::DebuggerClientOptions const&) hphp/runtime/debugger/debugger_client.cpp:783
007+     #5 0xa015d01 in HPHP::Eval::DebuggerClient::Start(HPHP::Eval::DebuggerClientOptions const&) hphp/runtime/debugger/debugger_client.cpp:399
008+     #6 0x9fe833b in HPHP::Eval::Debugger::StartClient(HPHP::Eval::DebuggerClientOptions const&) hphp/runtime/debugger/debugger.cpp:56
009+     #7 0x884b41f in HPHP::execute_program_impl(int, char**) hphp/runtime/base/program-functions.cpp:1929
010+     #8 0x883d586 in HPHP::execute_program(int, char**) hphp/runtime/base/program-functions.cpp:1200
011+     #9 0x72782e in main hphp/hhvm/main.cpp:85
012+     #10 0x7ffbeed1c857 in __libc_start_main /home/engshare/third-party2/glibc/2.23/src/glibc-2.23/csu/../csu/libc-start.c:289
013+     #11 0x724e28 in _start /home/engshare/third-party2/glibc/2.23/src/glibc-2.23/csu/../sysdeps/x86_64/start.S:118
015+ SUMMARY: UndefinedBehaviorSanitizer: invalid-bool-load hphp/runtime/debugger/debugger_client.cpp:2515:7 in

```
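For reference, a minimal reproduction of this class of bug (illustrative only; the struct below is not the actual DebuggerClient code): UBSAN's invalid-bool-load fires when the byte backing a bool holds something other than 0 or 1.

```cpp
#include <cstring>

struct Config {
  bool smallStep;                        // backing byte never set to 0 or 1
};

int main() {
  alignas(Config) char raw[sizeof(Config)];
  std::memset(raw, 0xBE, sizeof(raw));   // 0xBE == 190, as in the report
  Config* c = reinterpret_cast<Config*>(raw);
  return c->smallStep ? 1 : 0;           // UBSAN: load of value 190 ...
}
```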

Reviewed By: markw65

Differential Revision: D5898634

fbshipit-source-id: 102f90ead9766b381a4dc722800ad6e7fb8a76be
hhvm-bot pushed a commit that referenced this issue Feb 13, 2018
Summary:
This change improves the sequence generated when lowering the Vptr element of the lea
Vinstr by taking advantage of the sub-immediate instruction. This saves one instruction
per instance.

Before
=====
    (07) StLocRange<[0, 108)> t0:FramePtr, Uninit
        Main:
            0x52c0035c  d10043a0              sub x0, x29, #0x10 (16)
            0x52c00360  9280d9e1              movn x1, #0x6cf
            0x52c00364  8b0103a1              add x1, x29, x1
            0x52c00368  3900201f              strb wzr, [x0, #8]

After
====
    (07) StLocRange<[0, 108)> t0:FramePtr, Uninit
        Main:
            0x50400404  d10043a0              sub x0, x29, #0x10 (16)
            0x50400408  d11b43a1              sub x1, x29, #0x6d0 (1744)  //<<---
            0x5040040c  3900201f              strb wzr, [x0, #8]

This was seen approximately 300 times in hphp/test/quick/all_type_comparison_test.php
and around 1000 times in hphp/test/zend/good/ext/intl/tests/grapheme.php

The standard regression tests were run with six option sets.  No new failures were observed.
Closes #7929

Differential Revision: D5706712

Pulled By: mxw

fbshipit-source-id: 09dd2597677567a965e00992dfeb6d8121a1bc69
hhvm-bot pushed a commit that referenced this issue Feb 27, 2018
Summary:
Exposed by UBSAN:
```lang=bash
001+ third-party-buck/gcc-5-glibc-2.23/build/boost/5c6f7a9/include/boost/dll/shared_library_load_mode.hpp:227:19: runtime error: load of value 4278190079, which is not a valid value for type 'boost::dll::load_mode::type'
002+     #0 0x7fa35160cbe7 in boost::dll::load_mode::operator&=(boost::dll::load_mode::type&, boost::dll::load_mode::type) third-party-buck/gcc-5-glibc-2.23/build/boost/5c6f7a9/include/boost/dll/shared_library_load_mode.hpp:227
003+     #1 0x7fa35160b01c in boost::dll::detail::shared_library_impl::load(boost::filesystem::path, boost::dll::load_mode::type, boost::system::error_code&) third-party-buck/gcc-5-glibc-2.23/build/boost/5c6f7a9/include/boost/dll/detail/posix/shared_library_impl.hpp:102
004+     #2 0x7fa351609d4e in boost::dll::shared_library::load(boost::filesystem::path const&, boost::dll::load_mode::type) third-party-buck/gcc-5-glibc-2.23/build/boost/5c6f7a9/include/boost/dll/shared_library.hpp:247
005+     #3 0x7fa351607188 in HPHP::Skip::Skip() hphp/facebook/extensions/skip/ext_skip.cpp:444
006+     #4 0x7fa34e9163c3 in __cxx_global_var_init.6 hphp/facebook/extensions/skip/ext_skip.cpp:590
007+     #5 0x7fa34e916422 in _GLOBAL__sub_I_ext_skip.cpp hphp/facebook/extensions/skip/ext_skip.cpp
008+     #6 0x7fa3546ee80a in call_init.part.0 /home/engshare/third-party2/glibc/2.23/src/glibc-2.23/elf/dl-init.c:72
009+     #7 0x7fa3546ee92b in call_init /home/engshare/third-party2/glibc/2.23/src/glibc-2.23/elf/dl-init.c:30
010+     #8 0x7fa3546ee92b in _dl_init /home/engshare/third-party2/glibc/2.23/src/glibc-2.23/elf/dl-init.c:120
011+     #9 0x7fa3546dec59  (/usr/local/fbcode/gcc-5-glibc-2.23/lib/ld.so+0xc59)
012+
013+ SUMMARY: UndefinedBehaviorSanitizer: invalid-enum-load third-party-buck/gcc-5-glibc-2.23/build/boost/5c6f7a9/include/boost/dll/shared_library_load_mode.hpp:227:19
```
Check whether the value fits into int64_t before casting; otherwise use the minimum. I verified that tests fail if we return something other than the minimum (say, 0).
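A sketch of that guard (assuming a floating-point source value; the function name is illustrative):

```cpp
#include <cstdint>
#include <limits>

int64_t toInt64OrMin(double v) {
  // -2^63 is exactly representable as a double, so >= is safe on the low
  // side; 2^63 is the first double above int64_t's maximum, so the high
  // side must be a strict < comparison.
  if (v >= static_cast<double>(std::numeric_limits<int64_t>::min()) &&
      v < 9223372036854775808.0 /* 2^63 */) {
    return static_cast<int64_t>(v);
  }
  return std::numeric_limits<int64_t>::min();   // out of range (or NaN)
}
```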

Reviewed By: markw65

Differential Revision: D7072139

fbshipit-source-id: 454ccc5cb1e465fd308678d796bbf60039ffac53
hhvm-bot pushed a commit that referenced this issue Jun 26, 2019
Summary:
```
Assertion failure: /tmp/hhvm-4.11-20190624-29824-15nb7r5/hhvm-4.11.0/hphp/compiler/analysis/analysis_result.cpp:53: HPHP::AnalysisResult::~AnalysisResult(): assertion `!m_finish' failed.

* thread #1, stop reason = signal SIGSTOP
  * frame #0: 0x00007fff6c6592c6 libsystem_kernel.dylib`__pthread_kill + 10
    frame #1: 0x00007fff6c70ebf1 libsystem_pthread.dylib`pthread_kill + 284
    frame #2: 0x00007fff6c576d8a libsystem_c.dylib`raise + 26
    frame #3: 0x00000000063f3a0f hhvm`HPHP::bt_handler(int, __siginfo*, void*) + 1655
    frame #4: 0x00007fff6c703b5d libsystem_platform.dylib`_sigtramp + 29
    frame #5: 0x00007fff6c6592c7 libsystem_kernel.dylib`__pthread_kill + 11
    frame #6: 0x00007fff6c70ebf1 libsystem_pthread.dylib`pthread_kill + 284
    frame #7: 0x00007fff6c5c36a6 libsystem_c.dylib`abort + 127
    frame #8: 0x0000000004df327f hhvm`HPHP::assert_fail(char const*, char const*, unsigned int, char const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 267
    frame #9: 0x0000000004dacd05 hhvm`HPHP::AnalysisResult::~AnalysisResult() + 189
    frame #10: 0x0000000004dae03e hhvm`HPHP::Compiler::emitAllHHBC(std::__1::shared_ptr<HPHP::AnalysisResult>&&) + 500
    frame #11: 0x0000000002c10b87 hhvm`HPHP::hhbcTarget(HPHP::CompilerOptions const&, std::__1::shared_ptr<HPHP::AnalysisResult>&&, HPHP::AsyncFileCacheSaver&) + 753
    frame #12: 0x0000000002c0fdb7 hhvm`HPHP::process(HPHP::CompilerOptions const&) + 2152
    frame #13: 0x0000000002c0ce34 hhvm`HPHP::compiler_main(int, char**) + 489
    frame #14: 0x00007fff6c51e3d5 libdyld.dylib`start + 1
```

Given that our Linux clang builds are fine, this is likely a libc++ vs. libstdc++ issue.

markw65 found this in the standard:

> For (4), other is in a valid but unspecified state after the call

Reviewed By: markw65

Differential Revision: D15997122

fbshipit-source-id: c72950e1365f1141445794608f9e56f1c5d6ed89
hhvm-bot pushed a commit that referenced this issue Aug 8, 2019
facebook-github-bot pushed a commit that referenced this issue May 1, 2020
Summary:
This is a redo of D20805919 with a fix for the crash, described in T65255134.

Crash stack:

```
#0  0x0000000006566dbe in HPHP::arrprov::tagFromPC () at hphp/runtime/base/array-provenance.cpp:343
#1  0x000000000670b2d5 in HPHP::arrprov::tagStaticArr (tag=..., ad=<optimized out>) at hphp/runtime/base/array-provenance.cpp:291
#2  HPHP::ArrayData::CreateDict (tag=..., tag=...) at buck-out/opt-hhvm-lto/gen/hphp/runtime/headers#header-mode-symlink-tree-only,headers,v422239b/hphp/runtime/base/array-data-inl.h:105
#3  HPHP::Array::CreateDict () at buck-out/opt-hhvm-lto/gen/hphp/runtime/headers#header-mode-symlink-tree-only,headers,v422239b/hphp/runtime/base/type-array.h:97
#4  HPHP::IniSettingMap::IniSettingMap (this=<optimized out>, this=<optimized out>) at hphp/runtime/base/ini-setting.cpp:517
#5  0x00000000068835bf in HPHP::IniSettingMap::IniSettingMap (this=0x7fff36f4d8f0) at hphp/runtime/base/config.cpp:370
#6  HPHP::Config::Iterate(std::function<void (HPHP::IniSettingMap const&, HPHP::Hdf const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)>, HPHP::IniSettingMap const&, HPHP::Hdf const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool) (cb=..., ini=..., config=..., name=..., prepend_hhvm=prepend_hhvm@entry=true) at hphp/runtime/base/config.cpp:370
#7  0x0000000006927373 in HPHP::RuntimeOption::Load (ini=..., config=..., iniClis=..., hdfClis=..., messages=messages@entry=0x7fff36f4e170, cmd=...)
    at third-party-buck/platform007/build/libgcc/include/c++/trunk/bits/std_function.h:106
#8  0x000000000695aaee in HPHP::execute_program_impl (argc=argc@entry=21, argv=argv@entry=0x7fff36f4fac8) at hphp/runtime/base/program-functions.cpp:1716
```

The immediate cause of the crash is trying to access vmfp() inside tagFromPC before the VM was initialized, resulting in a null-pointer dereference.
However, provenance tagging should have used the runtime tag override set in RuntimeOption::Load, instead of trying to compute a tag off the PC.
The problem is that TagOverride short-circuits itself if Eval.ArrayProvenance is disabled.

So we run into the following sequence of events:
1. we start with EvalArrayProvenance=false, the default value
2. TagOverride short-circuits and doesn't actually update the override
3. we parse config options and set EvalArrayProvenance=true
4. we try to create a dict, decide that it needs provenance, and try to compute a tag; since there is no override set, we fall back to tagFromPC and crash

To fix this I made TagOverride not short-circuit for this one specific call site.
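A sketch of that shape (everything below is illustrative; only the TagOverride name and its short-circuit behavior come from this summary):

```cpp
#include <optional>

struct Tag { const char* file; int line; };

bool g_arrayProvenance = false;          // flipped later by config parsing
thread_local std::optional<Tag> g_tagOverride;

struct TagOverride {
  struct ForceTag {};

  explicit TagOverride(Tag t) {
    // Normal form: short-circuit when provenance is off, since the flag
    // is usually final by the time tags are requested.
    if (g_arrayProvenance) install(t);
  }

  // Forced form for the one call site, where the flag may still flip
  // from false to true while the override is live.
  TagOverride(Tag t, ForceTag) { install(t); }

  ~TagOverride() {
    if (m_installed) g_tagOverride = m_saved;
  }

 private:
  void install(Tag t) {
    m_saved = g_tagOverride;
    g_tagOverride = t;
    m_installed = true;
  }

  std::optional<Tag> m_saved;
  bool m_installed = false;
};
```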

The specific nature of the bug also explains why it didn't get caught by any prior testing:
- hphp tests could not catch it, because they run with a minimal config, which doesn't exercise Config::Iterate
- I didn't specifically test the sandbox with array provenance enabled, which would have caught it
- This wouldn't be caught in servicelab, since we enable provenance in SL by changing default values. Also, I didn't run SL with provenance enabled for this.

What would catch this is starting either a sandbox or a prod-like web server with an actual config.hdf or config.hdf.devrs and array provenance enabled via config or command-line options.

Differential Revision: D20974470

fbshipit-source-id: 6474fe4e4cf808c4e8572539119cd57374658877
facebook-github-bot pushed a commit that referenced this issue Jul 14, 2020
Summary:
I was trying to run servicelab with provenance enabled and ran into an HHBC crash. Stack trace:

```
#0  0x0000000005e2b553 in HPHP::arrprov::tagFromPC () at buck-out/opt-hhvm-lto/gen/hphp/runtime/headers#header-mode-symlink-tree-only,headers,v5c9e8e3/hphp/runtime/base/rds-header.h:127
#1  0x0000000005e2f059 in HPHP::arrprov::tagStaticArr (ad=<optimized out>, tag=...) at hphp/runtime/base/array-provenance.cpp:297
#2  0x00000000007b8b5b in HPHP::ArrayData::CreateDArray (tag=..., tag=...) at buck-out/opt-hhvm-lto/gen/hphp/runtime/headers#header-mode-symlink-tree-only,headers,v5c9e8e3/hphp/runtime/base/array-data-inl.h:60
#3  HPHP::Array::CreateDArray () at buck-out/opt-hhvm-lto/gen/hphp/runtime/headers#header-mode-symlink-tree-only,headers,v5c9e8e3/hphp/runtime/base/type-array.h:109
#4  HPHP::preg_match_impl (pattern=pattern@entry=0x7f408a805860, subject=subject@entry=0x7f408a805840, subpats=subpats@entry=0x0, flags=flags@entry=0, start_offset=start_offset@entry=0, global=false) at hphp/runtime/base/preg.cpp:1146
#5  0x00000000060bdf89 in HPHP::preg_match (offset=0, flags=0, matches=0x0, subject=<optimized out>, pattern=0x7f408a805860) at buck-out/opt-hhvm-lto/gen/hphp/runtime/headers#header-mode-symlink-tree-only,headers,v5c9e8e3/hphp/runtime/base/memory-manager-inl.h:114
#6  HPHP::preg_match (flags=0, offset=0, matches=0x0, subject=..., pattern=...) at hphp/runtime/base/preg.cpp:1382
#7  HPHP::Config::matchHdfPattern (value=..., ini=..., hdfPattern=..., name=..., suffix=...) at hphp/runtime/base/config.cpp:395
#8  0x000000000885be96 in HPHP::(anonymous namespace)::applyBuildOverrides (ini=..., config=...) at hphp/util/hdf.cpp:90
#9  0x000000000886f000 in HPHP::prepareOptions (argv=<optimized out>, argc=51, po=...) at hphp/compiler/compiler.cpp:450
#10 HPHP::compiler_main (argc=51, argv=<optimized out>) at hphp/compiler/compiler.cpp:157
#11 0x00007f4120b001a6 in __libc_start_main (main=0x1cf2080 <main(int, char**)>, argc=52, argv=0x7ffda8b93048, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffda8b93038) at ../csu/libc-start.c:308
#12 0x000000000b31cb5a in _start () at ../sysdeps/x86_64/start.S:120
```

Differential Revision: D22519863

fbshipit-source-id: 2ce590181d7f7259e490ce3206c08e76f4eaaa4b
facebook-github-bot pushed a commit that referenced this issue May 11, 2021
Summary:
This diff addresses Monitor_connection_failure. It's all about how we behave when we have a huge number of incoming requests. The end result: (1) no more Monitor_connection_failure, ever; (2) hh will wait indefinitely, or up to --timeout, for a connection to the server.
```
[client] * N -> [monitor] -> [server]
```
1. The server has a finite incoming monitor->server pipe, and it runs an infinite loop: (1) pick the next workitem off the incoming pipe, (2) that frees up a slot in the pipe so the monitor can write the next item into it, (3) do a typecheck if needed, (4) handle the workitem.
2. The monitor has a socket with a finite incoming queue, and runs an infinite loop: (1) pick the next workitem off the incoming queue, (2) that frees up a slot in the queue so the next client can write into it, (3) attempt to write the workitem to the monitor->server pipe, but if that pipe has remained full for 4s, communicate back to the workitem's client that the server was busy.
3. Each client invocation attempts to put a workitem on the monitor's queue, with a timeout of 1s. If the monitor's queue was full, it fails fatally with Monitor_connection_failed. If the monitor failed to hand off, it fails with Server_hung_up_should_retry and retries exponentially. (A schematic model of this bounded-queue handoff follows after this list.)
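A schematic model of that handoff (the real code is OCaml; everything here is illustrative). The essence of both the monitor's incoming queue and the monitor->server pipe is a bounded queue where producers block with a timeout:

```cpp
#include <chrono>
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

template <typename T>
class BoundedQueue {
 public:
  explicit BoundedQueue(size_t cap) : cap_(cap) {}

  // Producer side: block up to `timeout` for a free slot, as a client
  // does against the monitor and the monitor does against the server.
  bool tryPush(T item, std::chrono::milliseconds timeout) {
    std::unique_lock<std::mutex> lk(m_);
    if (!notFull_.wait_for(lk, timeout, [&] { return q_.size() < cap_; }))
      return false;                 // backpressure: the caller must retry
    q_.push_back(std::move(item));
    notEmpty_.notify_one();
    return true;
  }

  // Consumer side: popping frees a slot, which wakes a waiting producer.
  T pop() {
    std::unique_lock<std::mutex> lk(m_);
    notEmpty_.wait(lk, [&] { return !q_.empty(); });
    T item = std::move(q_.front());
    q_.pop_front();
    notFull_.notify_one();
    return item;
  }

 private:
  size_t cap_;
  std::mutex m_;
  std::condition_variable notFull_, notEmpty_;
  std::deque<T> q_;
};
```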

## Callers of hh_client should decide timeouts.

There's a "hh_client --timeout" parameter (by default infinite). The right design is that if the server is too busy, we should wait until it's available, up to that timeout. It's not right for us to unilaterally fatal when the server is busy. That's what this diff achieves. This means we no longer have a finite queue of hh_client requests (manifesting as Monitor_connection_failure roughly when it gets too big); it's now an infinite queue, embodied in the unix process list, as a set of processes all waiting for their chance.

> Note: HackAst generally doesn't specify a timeout. (1) If arc lint kicks off too many requests, that used to manifest as some of them getting failures but hh_server becomes available after 30-60s as they all fail; now it'll just go in a big queue, and the user will see in the spinner that hh_server is busy with a linter. (2) If an flib test kicks off too many requests, that used to manifest as some of them getting Monitor_connection_failure failures; now it'll manifest as the whole flib test timing out. (3) If a codemod kicks off too many requests, that used to manifest as the codemod failing with Monitor_connection_failure; now it'll manifest as the codemod taking longer but succeeding.

## It's all about backpressure.

Imagine that the system is full to capacity with short requests. Backpressure from the server shows up as the server removing items from the monitor->server pipe only as it handles requests. This backpressure propagates to the monitor, which can only hand off to the server once per handled request, and hence can only pull items off its own incoming queue at that rate too. It propagates in turn to the clients, which will time out if they can't put an item on the monitor's queue. To make backpressure flow correctly, this diffstack makes the following changes:
1. The monitor will wait up to 30s for the server to accept a workitem. All normal server requests will be handled in this time. The ones that won't are slow find-all-refs and slow typechecks. (The previous 4s meant that even normal requests got the user-facing message "server isn't responding; reconnecting...")
2. The client will have a timeout up to "hh_client --timeout" in its attempts to put work onto the monitor's incoming queue. If the user didn't specify --timeout (i.e. infinite) then I put a client timeout of 60s plus automatic immediate retries. (The previous Monitor_connection_failure fatal error was the whole problem).

## We never want the server to go idle.

The best way for the monitor to wait is *as part of* its attempt to hand off to the server. This way, as soon as the server is ready, it will have another workitem and will never go idle. Likewise, the best way for a client to wait is *as part of* its attempt to establish a connection to the monitor and put a workitem on the monitor's incoming queue. That way, as soon as the monitor is ready, it will have another workitem and will never go idle. We should *not* be waiting in `Unix.sleep`; that's always wrong. Effectively, our clients will park themselves in the unix process list, and the kernel will know that they're waiting for a slot to become available in the monitor's incoming queue, and will wake them up when it becomes available. This is better than the current solution of cancelling and retrying every 1s, a kind of "busy spin loop" which pointlessly burns CPU.

## Why is the monitor-to-server handoff timeout 30s?

For the handoff timeout, why did I put 30s rather than infinite?

There is only one scenario where we will exceed the 30s timeout. That's when both (1) the server's queue is full because there's been a high rate of incoming requests, (2) the server's current work, either for a file-change-triggered typecheck or a costly client request, takes longer than 30s.

*This scenario is a misuse of hack*. It simply doesn't make sense to have a high volume of costly requests. The computer will never be able to keep up. It's fine to have a high volume of cheap requests, e.g. from typical Aurora linters. It's fine to have a low volume of expensive requests, e.g. from a file change, or --find-refs. But the combination should be warned about. When the handoff times out, the monitor will send Monitor_failed_to_handoff to the client, which will display to the user "Hack server is too busy", and fail with exit code Server_hung_up_should_retry, and find_hh.sh will display "Reconnecting..." and retry with exponential backoff. That's not the mechanism I'd have chosen, but it's at least a reasonable way to alert the user that something's seriously wrong, while still not failing.

If I'd used an infinite timeout then the user would never see "Hack is too busy. Reconnecting...". Also I have a general distrust of infinite timeouts. I worry that there will be cases where something got stuck, and I'd like for the user to at least become unstuck after a minute. If I'd used a smaller timeout like the current 4s, then the user will see this "Reconnecting..." message even in reasonable high-request-rate scenarios like --type-at-pos-batch. 30s feels like a good cutoff. It will display a message in the CLI if hack is being misused, as I described.

## Why is the client timeout 60s?

In the (default) case where the user didn't specify a --timeout parameter and we therefore use an infinite timeout, I again got cold feet about actually waiting forever. I instead set it at 60s plus an immediate retry if that times out. This is a bit like a busy spin loop, but it's not terribly busy. I got cold feet because there might be other, unknown causes of failure, and I didn't want them to just hang indefinitely.

Let's examine what the client code actually does for the "60s/retry" case:
1. Server_died. This happens from ServerMonitor sending Prehandoff.Server_died or Prehandoff.Server_died_config_changed. (I think there are bugs here; if the server died then the current hh_client should exit with code 6 "server_hung_up_should_retry" so that find_hh.sh will launch a new version of the hh_client binary. But I'll leave that for now). In this case hh_client retries
2. Timeout for the monitor to accept the request. This means there is backpressure from the monitor telling clients to slow down. This is the case where hh_client used to fatally exit with Monitor_connection_failure exit code 9. Now, with this diff, it hits the 60s timeout then immediately retries.
3. Timeout AFTER the monitor has accepted the request but before it's handed off. In this case hh_client has always retried
4. Timeout while the monitor attempts to handoff. This is the "30s monitor handoff timeout" discussed above. In this case hh_client exits with exit code 6, and find_hh.sh retries with exponential backoff, and the user sees "Server has hung up. Reconnecting..."
5. We don't even measure timeout while waiting for the server to process the request.

In the past (see comments in the previous diff), if case 2 just did an immediate retry, the test plan got into a problem state where the monitor was stuck trying to read the client version. Previously this had been described as "wedged". I think that was caused by clients having a low timeout of just 1s, indeed smaller than the monitor->server timeout. But since I changed the client timeout to 60s, that no longer happens.

*30s < 60s to avoid livelock pathology.*

The client's timeout must be comfortably longer than the monitor's. Imagine if it were reversed, with a client timeout of 30s and a monitor timeout of 60s. Then every workitem on the monitor's queue would be stale by the time the monitor picks it up, so the monitor would get nothing to send to the server, and after it burns through its queue rejecting every item (since each client is no longer around) it would just have to wait until a client retries. That would be a self-inflicted busy loop.

Let's spell it out. It had been observed in June 2017 that if there were 1000 parallel clients then the monitor became "wedged". That report didn't define precisely what was observed, but D5205759 (b57d70b) explained a hypothesis:
1. Imagine the monitor's incoming socket queue is size 1000, and there are 1000 parallel clients.
2. Imagine that the first 999 of these clients have already given up (due to timeout), so as the monitor works through them, each attempt to send Connection_ok back to a client gets EPIPE. Imagine it takes the monitor 1ms for each one.
3. Clients in general have a 1s timeout which encompasses both waiting until the monitor enqueues it, and waiting until the monitor responds to it. If this deadline expires then they wait 0.1s and retry.
4. Imagine our client was number 1000. Because it took 999ms to get EPIPE back from the earlier items, by the time the monitor is ready to deal with us we've probably exceeded our 1s time limit, so the monitor will cancel us too.
5. In this time 999 other clients have again added themselves to the queue. We too will add ourselves to the queue. And the cycle will repeat again.

I experimented prior to this diffstack but was unable to reproduce this behavior. What I instead observed was that the client was in an ongoing loop of trying to connect to the monitor but failing because the monitor's incoming queue was full, and the monitor had failed to pick up anything from its queue for over five minutes. Here's what the monitor log and pstack looked like:
```
[2021-05-04 15:31:28.825] [t#O8Dg1HTCkS] read_version: got version {"client_version":"","tracker_id":"t#O8Dg1HTCkS"}, started at [2021-05-04 15:31:28.825]
[2021-05-04 15:31:28.825] [t#O8Dg1HTCkS] sending Connection_ok...
[2021-05-04 15:31:28.825] SIGPIPE(-8)
[2021-05-04 15:31:28.825] Ack_and_handoff failure; closing client FD: Unix.Unix_error(Unix.EPIPE, "write", "")
[2021-05-04 15:31:28.825] [t#oKIHi8xMAS] read_version: got version {"client_version":"","tracker_id":"t#oKIHi8xMAS"}, started at [2021-05-04 15:31:28.825]
[2021-05-04 15:31:28.825] [t#oKIHi8xMAS] sending Connection_ok...
[2021-05-04 15:31:28.825] SIGPIPE(-8)
[2021-05-04 15:31:28.825] Ack_and_handoff failure; closing client FD: Unix.Unix_error(Unix.EPIPE, "write", "")

#1  0x00000000025d0b31 in unix_select (readfds=<optimized out>, writefds=<optimized out>, exceptfds=<optimized out>, timeout=113136840) at select.c:94
#2  0x00000000022ef1f4 in camlSys_utils__select_non_intr_8811 () at /data/users/ljw/fbsource/fbcode/hphp/hack/src/utils/sys/sys_utils.ml:592
#3  0x00000000022a2666 in camlMarshal_tools__read_217 () at /data/users/ljw/fbsource/fbcode/hphp/hack/src/utils/marshal_tools/marshal_tools.ml:136
#4  0x00000000022a302b in camlMarshal_tools__from_fd_with_preamble_383 () at /data/users/ljw/fbsource/fbcode/hphp/hack/src/utils/marshal_tools/marshal_tools.ml:263
#5  0x0000000001f90116 in camlServerMonitor__read_version_2818 () at /data/users/ljw/fbsource/fbcode/hphp/hack/src/monitor/serverMonitor.ml:315
#6  0x0000000001f9106a in camlServerMonitor__ack_and_handoff_client_4701 () at /data/users/ljw/fbsource/fbcode/hphp/hack/src/monitor/serverMonitor.ml:548
#7  0x0000000001f92509 in camlServerMonitor__check_and_run_loop__4776 () at /data/users/ljw/fbsource/fbcode/hphp/hack/src/monitor/serverMonitor.ml:842
#8  0x0000000001f920cc in camlServerMonitor__check_and_run_loop_inner_5965 () at /data/users/ljw/fbsource/fbcode/hphp/hack/src/monitor/serverMonitor.ml:779
```
The client log is monstrously huge and full, for the final five minutes, of repeated (and failed-due-to-timeout) attempts to open the socket to the monitor:
```
[2021-05-04 15:53:00.223] [Check#IRhKGkxQn7] [t#QpQxPnW9wt] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.223] [Check#0NzdVomX9R] [t#6QT1JEbgJK] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.223] [Check#I9hYXmPzb0] [t#o9PRU7m1lM] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.224] [Check#CLtBMgCEVG] [t#WwlyQwUDhn] [client-connect] ClientConnect.connect: attempting MonitorConnection.connect_once
[2021-05-04 15:53:00.224] [Check#CLtBMgCEVG] [t#WwlyQwUDhn] Connection_tracker.Client_start_connect
[2021-05-04 15:53:00.224] [Check#zXvag0PQCK] [t#2NOmFISCOc] [client-connect] ClientConnect.connect: attempting MonitorConnection.connect_once
[2021-05-04 15:53:00.224] [Check#YoJPxiGVjK] [t#CICotuALSY] [client-connect] ClientConnect.connect: attempting MonitorConnection.connect_once
[2021-05-04 15:53:00.224] [Check#jCTjq2AcPp] [t#HvO1E3Rv10] [client-connect] ClientConnect.connect: attempting MonitorConnection.connect_once
[2021-05-04 15:53:00.224] [Check#VShC3XccG0] [t#LLbhdQhcch] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.224] [Check#YoJPxiGVjK] [t#CICotuALSY] Connection_tracker.Client_start_connect
[2021-05-04 15:53:00.224] [Check#MhnHnQmHIj] [t#u3gm7WR4B3] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.224] [Check#jCTjq2AcPp] [t#HvO1E3Rv10] Connection_tracker.Client_start_connect
[2021-05-04 15:53:00.224] [Check#U72xNuSb7N] [t#S47cx27Lta] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.224] [Check#zXvag0PQCK] [t#2NOmFISCOc] Connection_tracker.Client_start_connect
[2021-05-04 15:53:00.224] [Check#eGHledjDNT] [t#bQPqB2iG6c] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.224] [Check#7xj4rdPkhA] [t#NKGlmJYKAC] [client-connect] ClientConnect.connect: attempting MonitorConnection.connect_once
[2021-05-04 15:53:00.225] [Check#7xj4rdPkhA] [t#NKGlmJYKAC] Connection_tracker.Client_start_connect
[2021-05-04 15:53:00.225] [Check#kxQ7dr1Ve2] [t#JKFk5Pl5DB] [client-connect] ClientConnect.connect: attempting MonitorConnection.connect_once
[2021-05-04 15:53:00.225] [Check#kxQ7dr1Ve2] [t#JKFk5Pl5DB] Connection_tracker.Client_start_connect
[2021-05-04 15:53:00.225] [Check#s5HbtmgdVN] [t#ywpyouI5po] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.225] [Check#5192H11uaP] [t#ku2JO0LNvX] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.225] [Check#yOV3DR3w9] [t#r1irfsdLn5] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
```

Differential Revision: D28205345

fbshipit-source-id: a0f011ff2bcb379c3186ae5e319c4d1e9912c988
facebook-github-bot pushed a commit that referenced this issue May 13, 2021
Summary:
The original/'fixed' version of this regex put the `\d+` inside a character class, so the `+` never worked correctly for double-digit counts, resulting in errors like this.

```
> hphp/test/run --retranslate-all 5

Fatal error: Uncaught exception 'HH\InvariantException' with message 'unexpected output from shell_exec in runif_test_for_feature: 'Notice: File could not be loaded: 0'' in /data/users/wkl/fbsource/fbcode/hphp/test/run.php:2133
Stack trace:
#0 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(2133): HH\invariant_violation()
#1 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(2189): runif_test_for_feature()
#2 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(2326): runif_function_matches()
#3 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(2965): runif_should_skip_test()
#4 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(2935): run_test()
#5 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(1988): run_and_log_test()
#6 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(3770): child_main()
#7 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(3897): main()
#8 (): run_main()
#9 {main}
```
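A minimal demonstration of the character-class mistake (the pattern below is illustrative, not the actual one from run.php): inside `[...]`, `+` is a literal plus, so `[\d+]` matches exactly one digit-or-plus character and never repeats.

```cpp
#include <iostream>
#include <regex>
#include <string>

int main() {
  std::string s = "count=12";
  std::regex broken("count=[\\d+]");   // one char from the class {digit, '+'}
  std::regex fixed("count=\\d+");      // one or more digits
  std::smatch m;
  if (std::regex_search(s, m, broken)) std::cout << m[0] << "\n";  // count=1
  if (std::regex_search(s, m, fixed))  std::cout << m[0] << "\n";  // count=12
  return 0;
}
```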


Reviewed By: ottoni, mofarrell

Differential Revision: D28401997

fbshipit-source-id: 54dff6a21612cdeea53ddc31e99f4e41fd8205c9
facebook-github-bot pushed a commit that referenced this issue Oct 7, 2022
Summary:
We have seen a deadlock running `terminationHandler` -> `hasSubscribers` in two threads.
It's unclear which other thread is holding the lock.

To make things easier to debug next time, let's change terminationHandler (and
also main.cpp) to bypass the logging lock and write to stderr directly.
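A sketch of the "write to stderr directly" idea (illustrative; not the actual watchman code): in a termination handler, avoid anything that may take a lock and emit the message with a single async-signal-safe write(2) instead.

```cpp
#include <unistd.h>

void terminationHandlerSketch() {
  // write() takes no userspace locks, unlike a logger that consults
  // subscriber state under a mutex (the hasSubscribers path below).
  const char msg[] = "terminating due to uncaught exception\n";
  ssize_t rc = ::write(STDERR_FILENO, msg, sizeof(msg) - 1);
  (void)rc;   // nothing sensible to do on failure while terminating
}
```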

Related threads (all threads in P536343453):

  Thread 11 (LWP 3275661):
  #0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
  #1  0x0000000001cc995b in folly::detail::(anonymous namespace)::nativeFutexWaitImpl (addr=<optimized out>, expected=<optimized out>, absSystemTime=<optimized out>, absSteadyTime=<optimized out>, waitMask=<optimized out>) at fbcode/folly/detail/Futex.cpp:126
  #2  folly::detail::futexWaitImpl (futex=0x89, futex@entry=0x7f1c3ac2ef90, expected=994748889, absSystemTime=absSystemTime@entry=0x0, absSteadyTime=<optimized out>, absSteadyTime@entry=0x0, waitMask=waitMask@entry=1) at fbcode/folly/detail/Futex.cpp:254
  #3  0x0000000001d34bce in folly::detail::futexWait<std::atomic<unsigned int> > (futex=0x7f1c3ac2ef90, expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/detail/__futex__/headers/folly/detail/Futex-inl.h:96
  #4  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever::doWait (this=<optimized out>, futex=..., expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:718
  #5  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::futexWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1184
  #6  0x0000000001cd42b2 in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::yieldWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1151
  #7  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::waitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1109
  #8  0x0000000001e7e14c in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1664
  #9  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1356
  #10 folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lock_shared (this=0x7f1c3ac2ef90) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:495
  #11 std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >::shared_lock (this=<optimized out>, __m=...) at fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/shared_mutex:727
  #12 0x0000000002d765fd in folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::doLock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>, std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0>, 0> (mutex=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1493
  #13 folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::LockedPtr (this=0x7f1c149f8928, parent=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1272
  #14 folly::SynchronizedBase<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, (folly::detail::SynchronizedMutexLevel)2>::rlock (this=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:229
  #15 watchman::Publisher::hasSubscribers (this=<optimized out>) at fbcode/watchman/PubSub.cpp:117
  #16 0x0000000002eca798 in watchman::Log::log<char const (&) [39], char const*, char const (&) [3]> (this=<optimized out>, level=level@entry=watchman::ABORT, args=..., args=..., args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:42
  #17 0x0000000002ec9ba7 in watchman::log<char const (&) [39], char const*, char const (&) [3]> (level=watchman::ABORT, args=..., args=..., args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:121
  #18 (anonymous namespace)::terminationHandler () at fbcode/watchman/SignalHandler.cpp:159
  #19 0x00007f1c3b0c7b3a in __cxxabiv1::__terminate (handler=<optimized out>) at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:48
  #20 0x00007f1c3b0c7ba5 in std::terminate () at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:58
  #21 0x0000000001c38c8b in __clang_call_terminate ()
  #22 0x0000000003284c9e in folly::detail::terminate_with_<std::runtime_error, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&> (args=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/lang/__exception__/headers/folly/lang/Exception.h:93
  #23 0x0000000003281bae in folly::terminate_with<std::runtime_error, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&> (args=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/lang/__exception__/headers/folly/lang/Exception.h:123
  #24 folly::SingletonVault::fireShutdownTimer (this=<optimized out>) at fbcode/folly/Singleton.cpp:499
  #25 0x0000000003281ad9 in folly::(anonymous namespace)::fireShutdownSignalHelper (sigval=...) at fbcode/folly/Singleton.cpp:454
  #26 0x00007f1c3b42b939 in timer_sigev_thread (arg=<optimized out>) at ../sysdeps/unix/sysv/linux/timer_routines.c:55
  #27 0x00007f1c3b41fc0f in start_thread (arg=<optimized out>) at pthread_create.c:434
  #28 0x00007f1c3b4b21dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

  ...

  Thread 1 (LWP 3201992):
  #0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
  #1  0x0000000001cc995b in folly::detail::(anonymous namespace)::nativeFutexWaitImpl (addr=<optimized out>, expected=<optimized out>, absSystemTime=<optimized out>, absSteadyTime=<optimized out>, waitMask=<optimized out>) at fbcode/folly/detail/Futex.cpp:126
  #2  folly::detail::futexWaitImpl (futex=0x89, futex@entry=0x7f1c3ac2ef90, expected=994748889, absSystemTime=absSystemTime@entry=0x0, absSteadyTime=<optimized out>, absSteadyTime@entry=0x0, waitMask=waitMask@entry=1) at fbcode/folly/detail/Futex.cpp:254
  #3  0x0000000001d34bce in folly::detail::futexWait<std::atomic<unsigned int> > (futex=0x7f1c3ac2ef90, expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/detail/__futex__/headers/folly/detail/Futex-inl.h:96
  #4  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever::doWait (this=<optimized out>, futex=..., expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:718
  #5  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::futexWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1184
  #6  0x0000000001cd42b2 in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::yieldWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1151
  #7  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::waitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1109
  #8  0x0000000001e7e14c in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1664
  #9  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1356
  #10 folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lock_shared (this=0x7f1c3ac2ef90) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:495
  #11 std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >::shared_lock (this=<optimized out>, __m=...) at fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/shared_mutex:727
  #12 0x0000000002d765fd in folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::doLock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>, std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0>, 0> (mutex=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1493
  #13 folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::LockedPtr (this=0x7ffd2d5be968, parent=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1272
  #14 folly::SynchronizedBase<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, (folly::detail::SynchronizedMutexLevel)2>::rlock (this=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:229
  #15 watchman::Publisher::hasSubscribers (this=<optimized out>) at fbcode/watchman/PubSub.cpp:117
  #16 0x0000000002ecac20 in watchman::Log::log<char const (&) [59]> (this=<optimized out>, level=level@entry=watchman::ABORT, args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:42
  #17 0x0000000002ec9b24 in watchman::log<char const (&) [59]> (level=watchman::ABORT, args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:121
  #18 (anonymous namespace)::terminationHandler () at fbcode/watchman/SignalHandler.cpp:165
  #19 0x00007f1c3b0c7b3a in __cxxabiv1::__terminate (handler=<optimized out>) at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:48
  #20 0x00007f1c3b0c7ba5 in std::terminate () at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:58
  #21 0x0000000002d8cde1 in std::thread::~thread (this=0x7f1c3ac2ef90) at fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/std_thread.h:152
  #22 0x00007f1c3b3cc8f8 in __run_exit_handlers (status=1, listp=0x7f1c3b598658 <__exit_funcs>, run_list_atexit=<optimized out>, run_dtors=<optimized out>) at exit.c:113
  #23 0x00007f1c3b3cca0a in __GI_exit (status=<optimized out>) at exit.c:143
  #24 0x00007f1c3b3b165e in __libc_start_call_main (main=0x2d11220 <main(int, char**)>, argc=2, argv=0x7ffd2d5bec78) at ../sysdeps/nptl/libc_start_call_main.h:74
  #25 0x00007f1c3b3b1718 in __libc_start_main_impl (main=0x2d11220 <main(int, char**)>, argc=2, argv=0x7ffd2d5bec78, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffd2d5bec68) at ../csu/libc-start.c:409
  #26 0x0000000002d0e181 in _start () at ../sysdeps/x86_64/start.S:116
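Frames #19–#22 above show the proximate trigger: during __run_exit_handlers, a still-joinable std::thread is destroyed, the C++ runtime calls std::terminate(), and the installed handler (watchman's terminationHandler) then tries to log, which takes the Publisher lock. A minimal sketch of the terminate-on-destruction mechanism, with hypothetical names (not watchman's actual code):

```
#include <chrono>
#include <thread>

// A std::thread with static storage duration: its destructor runs from
// __run_exit_handlers after exit() is called.
std::thread gWorker;

int main() {
  gWorker = std::thread([] {
    std::this_thread::sleep_for(std::chrono::hours(1));
  });
  // Returning without gWorker.join() or gWorker.detach() leaves the
  // thread joinable, so ~thread() calls std::terminate(), which runs
  // the installed termination handler -- as in frames #19-#22 above.
  return 1;
}
```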

Reviewed By: xavierd

Differential Revision: D40166374

fbshipit-source-id: 7017e20234e5e0a9532eb61a63ac49ac0020d443
facebook-github-bot pushed a commit that referenced this issue Mar 8, 2023
Summary:
AsyncUDPSocket test cases crash when running under llvm15. During
debugging, the issue turned out to be that the code tries to allocate a
zero-size variable-length array. This change prevents that allocation.

This is not a very clean way to fix it, but I am not sure there is a better one.
Please let me know if you have any suggestions.

Sample crash:
```
$ buck test //folly/io/async/test:async_udp_socket_test
...
stdout:
Note: Google Test filter = AsyncUDPSocketTest.TestDetachAttach
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from AsyncUDPSocketTest
[ RUN      ] AsyncUDPSocketTest.TestDetachAttach

stderr:
fbcode/folly/io/async/AsyncUDPSocket.cpp:699:10: runtime error: variable length array bound evaluates to non-positive value 0
    #0 0x7f4d8ed93704 in folly::AsyncUDPSocket::writev(folly::SocketAddress const&, iovec const*, unsigned long, int) fbcode/folly/io/async/AsyncUDPSocket.cpp:698
    #1 0x7f4d8ed9081f in folly::AsyncUDPSocket::writeGSO(folly::SocketAddress const&, std::unique_ptr<folly::IOBuf, std::default_delete<folly::IOBuf>> const&, int) fbcode/folly/io/async/AsyncUDPSocket.cpp:528
    #2 0x7f4d8ed900b2 in folly::AsyncUDPSocket::write(folly::SocketAddress const&, std::unique_ptr<folly::IOBuf, std::default_delete<folly::IOBuf>> const&) fbcode/folly/io/async/AsyncUDPSocket.cpp:660
    #3 0x350a05 in AsyncUDPSocketTest_TestDetachAttach_Test::TestBody() fbcode/folly/io/async/test/AsyncUDPSocketTest.cpp:914
    #4 0x7f4d90dd1ad5 in testing::Test::Run() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:2682:50
    #5 0x7f4d90dd1ad5 in testing::Test::Run() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:2672:6
    #6 0x7f4d90dd1c64 in testing::TestInfo::Run() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:2861:14
    #7 0x7f4d90dd1c64 in testing::TestInfo::Run() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:2833:6
    #8 0x7f4d90dd2321 in testing::TestSuite::Run() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:3015:31
    #9 0x7f4d90dd2321 in testing::TestSuite::Run() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:2993:6
    #10 0x7f4d90dd2b1e in testing::internal::UnitTestImpl::RunAllTests() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:5855:47
    #11 0x7f4d90dd1d87 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:2665:29
    #12 0x7f4d90dd1d87 in testing::UnitTest::Run() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:5438:55
    #13 0x7f4d90d5c990 in RUN_ALL_TESTS() fbcode/third-party-buck/platform010/build/googletest/include/gtest/gtest.h:2490
    #14 0x7f4d90d5c618 in main fbcode/common/gtest/LightMain.cpp:20
    #15 0x7f4d8ea2c656 in __libc_start_call_main /home/engshare/third-party2/glibc/2.34/src/glibc-2.34/csu/../sysdeps/nptl/libc_start_call_main.h:58:16
    #16 0x7f4d8ea2c717 in __libc_start_main@GLIBC_2.2.5 /home/engshare/third-party2/glibc/2.34/src/glibc-2.34/csu/../csu/libc-start.c:409:3
    #17 0x33ea60 in _start /home/engshare/third-party2/glibc/2.34/src/glibc-2.34/csu/../sysdeps/x86_64/start.S:116
...
```
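
For illustration, here is a minimal sketch of the failure mode and the guard; the function and variable names are hypothetical, not folly's actual writev() code:

```
#include <cstddef>
#include <cstdio>

// Declaring a variable-length array (a GCC/Clang extension in C++) with
// a runtime bound of zero is undefined, and is what UBSan under llvm15
// reports as "variable length array bound evaluates to non-positive value".
void sendWithControlBuffer(size_t numPackets) {
  if (numPackets == 0) {
    // Guard mirroring the fix: skip the path that would declare a
    // zero-length VLA.
    return;
  }
  char control[numPackets * 16]; // VLA sized by a runtime value
  std::printf("control buffer of %zu bytes\n", sizeof(control));
}

int main() {
  sendWithControlBuffer(0); // without the guard, UBSan flags this call
  sendWithControlBuffer(2);
}
```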

Reviewed By: russoue, dmm-fb

Differential Revision: D43858875

fbshipit-source-id: 93749bab17027b6dfc0dbc01b6c183e501a5494c
facebook-github-bot pushed a commit that referenced this issue Apr 27, 2024
Summary:
ThreadLocalDetailTest.MultiThreadedTest had a data race on a std::vector: multiple threads were accessing and modifying the vector without holding a mutex. I updated the code to use indexes so that the vector is not modified concurrently (see the sketch below).
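
As a minimal sketch of the fixed pattern (hypothetical names; the real test in ThreadLocalDetailTest.cpp is more involved): size the vector once before spawning threads, never resize it while they run, and give each thread a stable index for read-only access:

```
#include <cstddef>
#include <memory>
#include <thread>
#include <vector>

struct Barrier {}; // stand-in for folly::test::Barrier

int main() {
  constexpr size_t kThreads = 4;

  // Fill the vector up front; after this point it is never resized, so
  // concurrent barriers[i] reads from worker threads are safe.
  std::vector<std::unique_ptr<Barrier>> barriers;
  for (size_t i = 0; i < kThreads; ++i) {
    barriers.push_back(std::make_unique<Barrier>());
  }

  std::vector<std::thread> threads;
  for (size_t i = 0; i < kThreads; ++i) {
    threads.emplace_back([&barriers, i] {
      Barrier* b = barriers[i].get(); // index-based, read-only access
      (void)b; // the real test waits on the barrier here
    });
  }
  for (auto& t : threads) {
    t.join();
  }
  return 0;
}
```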

Previous run failure that this fixes:

```
stderr:
==================
WARNING: ThreadSanitizer: data race (pid=1249005)
  Write of size 8 at 0x7fff195936d0 by main thread:
    #0 std::vector<std::unique_ptr<folly::test::Barrier, std::default_delete<folly::test::Barrier>>, std::allocator<std::unique_ptr<folly::test::Barrier, std::default_delete<folly::test::Barrier>>>>::pop_back() fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/stl_vector.h:1228 (thread_local_detail_test+0x116c28)
    #1 folly::threadlocal_detail::ThreadLocalDetailTest_MultiThreadedTest_Test::TestBody() fbcode/folly/detail/test/ThreadLocalDetailTest.cpp:121 (thread_local_detail_test+0x101aba)
    #2 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) fbsource/src/gtest.cc:2675 (libthird-party_googletest_1.14.0_gtest.so+0xacf3c)
    #3 testing::Test::Run() fbsource/src/gtest.cc:2692 (libthird-party_googletest_1.14.0_gtest.so+0xacb9c)
    #4 testing::TestInfo::Run() fbsource/src/gtest.cc:2841 (libthird-party_googletest_1.14.0_gtest.so+0xafa4a)
    #5 testing::TestSuite::Run() fbsource/src/gtest.cc:3020 (libthird-party_googletest_1.14.0_gtest.so+0xb4303)
    #6 testing::internal::UnitTestImpl::RunAllTests() fbsource/src/gtest.cc:5925 (libthird-party_googletest_1.14.0_gtest.so+0xd105e)
    #7 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) fbsource/src/gtest.cc:2675 (libthird-party_googletest_1.14.0_gtest.so+0xd08f1)
    #8 testing::UnitTest::Run() fbsource/src/gtest.cc:5489 (libthird-party_googletest_1.14.0_gtest.so+0xd037e)
    #9 RUN_ALL_TESTS() fbsource/gtest/gtest.h:2317 (libcommon_gtest_light_main.so+0x22a7)
    #10 main fbcode/common/gtest/LightMain.cpp:20 (libcommon_gtest_light_main.so+0x2131)

  Previous read of size 8 at 0x7fff195936d0 by thread T123:
    #0 std::vector<std::unique_ptr<folly::test::Barrier, std::default_delete<folly::test::Barrier>>, std::allocator<std::unique_ptr<folly::test::Barrier, std::default_delete<folly::test::Barrier>>>>::size() const fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/stl_vector.h:919 (thread_local_detail_test+0x11b0bd)
    #1 std::vector<std::unique_ptr<folly::test::Barrier, std::default_delete<folly::test::Barrier>>, std::allocator<std::unique_ptr<folly::test::Barrier, std::default_delete<folly::test::Barrier>>>>::operator[](unsigned long) fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/stl_vector.h:1045 (thread_local_detail_test+0x11d101)
    #2 folly::threadlocal_detail::ThreadLocalDetailTest_MultiThreadedTest_Test::TestBody()::$_0::operator()() const fbcode/folly/detail/test/ThreadLocalDetailTest.cpp:97 (thread_local_detail_test+0x11d08c)
    #3 void std::__invoke_impl<void, folly::threadlocal_detail::ThreadLocalDetailTest_MultiThreadedTest_Test::TestBody()::$_0>(std::__invoke_other, folly::threadlocal_detail::ThreadLocalDetailTest_MultiThreadedTest_Test::TestBody()::$_0&&) fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/invoke.h:61 (thread_local_detail_test+0x11cf95)
    #4 std::__invoke_result<folly::threadlocal_detail::ThreadLocalDetailTest_MultiThreadedTest_Test::TestBody()::$_0>::type std::__invoke<folly::threadlocal_detail::ThreadLocalDetailTest_MultiThreadedTest_Test::TestBody()::$_0>(folly::threadlocal_detail::ThreadLocalDetailTest_MultiThreadedTest_Test::TestBody()::$_0&&) fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/invoke.h:96 (thread_local_detail_test+0x11cf05)
    #5 void std::thread::_Invoker<std::tuple<folly::threadlocal_detail::ThreadLocalDetailTest_MultiThreadedTest_Test::TestBody()::$_0>>::_M_invoke<0ul>(std::_Index_tuple<0ul>) fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/std_thread.h:253 (thread_local_detail_test+0x11cebd)
    #6 std::thread::_Invoker<std::tuple<folly::threadlocal_detail::ThreadLocalDetailTest_MultiThreadedTest_Test::TestBody()::$_0>>::operator()() fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/std_thread.h:260 (thread_local_detail_test+0x11ce65)
    #7 std::thread::_State_impl<std::thread::_Invoker<std::tuple<folly::threadlocal_detail::ThreadLocalDetailTest_MultiThreadedTest_Test::TestBody()::$_0>>>::_M_run() fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/std_thread.h:211 (thread_local_detail_test+0x11cd79)
    #8 execute_native_thread_routine /home/engshare/third-party2/libgcc/11.x/src/gcc-11.x/x86_64-facebook-linux/libstdc++-v3/src/c++11/../../../.././libstdc++-v3/src/c++11/thread.cc:82:18 (libstdc++.so.6+0xdf4e4) (BuildId: 452d1cdae868baeeb2fdf1ab140f1c219bf50c6e)

  Location is stack of main thread.

  Location is global '??' at 0x7fff19575000 ([stack]+0x1e6d0)

  Thread T123 (tid=1249233, running) created by main thread at:
    #0 pthread_create <null> (thread_local_detail_test+0x156bef)
    #1 __gthread_create /home/engshare/third-party2/libgcc/11.x/src/gcc-11.x/x86_64-facebook-linux/libstdc++-v3/include/x86_64-facebook-linux/bits/gthr-default.h:663:35 (libstdc++.so.6+0xdf80e) (BuildId: 452d1cdae868baeeb2fdf1ab140f1c219bf50c6e)
    #2 std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State>>, void (*)()) /home/engshare/third-party2/libgcc/11.x/src/gcc-11.x/x86_64-facebook-linux/libstdc++-v3/src/c++11/../../../.././libstdc++-v3/src/c++11/thread.cc:147:37 (libstdc++.so.6+0xdf80e)
    #3 folly::threadlocal_detail::ThreadLocalDetailTest_MultiThreadedTest_Test::TestBody() fbcode/folly/detail/test/ThreadLocalDetailTest.cpp:93 (thread_local_detail_test+0x1015cd)
    #4 void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) fbsource/src/gtest.cc:2675 (libthird-party_googletest_1.14.0_gtest.so+0xacf3c)
    #5 testing::Test::Run() fbsource/src/gtest.cc:2692 (libthird-party_googletest_1.14.0_gtest.so+0xacb9c)
    #6 testing::TestInfo::Run() fbsource/src/gtest.cc:2841 (libthird-party_googletest_1.14.0_gtest.so+0xafa4a)
    #7 testing::TestSuite::Run() fbsource/src/gtest.cc:3020 (libthird-party_googletest_1.14.0_gtest.so+0xb4303)
    #8 testing::internal::UnitTestImpl::RunAllTests() fbsource/src/gtest.cc:5925 (libthird-party_googletest_1.14.0_gtest.so+0xd105e)
    #9 bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) fbsource/src/gtest.cc:2675 (libthird-party_googletest_1.14.0_gtest.so+0xd08f1)
    #10 testing::UnitTest::Run() fbsource/src/gtest.cc:5489 (libthird-party_googletest_1.14.0_gtest.so+0xd037e)
    #11 RUN_ALL_TESTS() fbsource/gtest/gtest.h:2317 (libcommon_gtest_light_main.so+0x22a7)
    #12 main fbcode/common/gtest/LightMain.cpp:20 (libcommon_gtest_light_main.so+0x2131)

ThreadSanitizer: data race fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/stl_vector.h:1228 in std::vector<std::unique_ptr<folly::test::Barrier, std::default_delete<folly::test::Barrier>>, std::allocator<std::unique_ptr<folly::test::Barrier, std::default_delete<folly::test::Barrier>>>>::pop_back()
==================
ThreadSanitizer: reported 1 warnings
```

Reviewed By: yfeldblum

Differential Revision: D56643303

fbshipit-source-id: cc364817ae79bc6418cc07faa35e1bcd21830c66
This issue was closed.