ICU is too old, found . and we need 4.2 #3

Closed
snikch opened this issue Feb 20, 2010 · 7 comments

Comments

@snikch

snikch commented Feb 20, 2010

I'm getting this error when trying to build. I've checked and I have 4.2.1 installed in /usr/local/lib/icu/

Any ideas?

@sebastianbergmann
Contributor

Did you export CMAKE_PREFIX_PATH=$PREFIX where $PREFIX is the directory to which ICU was installed?

@snikch
Author

snikch commented Feb 20, 2010

I used make install, so it shouldn't need that; none of the other dependencies I had to build are coming up with errors.

@scottmac
Contributor

Remove the CMakeCache.txt file that's in the main directory and re-run `cmake .`

I'll leave this open and change the code so that it stops caching this variable when the detected version is too old.

@rauanmayemir

If you're doing this on Ubuntu, just download the latest ICU debs from the Lucid branch.

@facebook
Collaborator

Fixed this now so that it no longer caches the results when it finds an old version.

@ghost

ghost commented Jul 20, 2010

Hi, I'm trying to build hiphop-php and got the same error "ICU is too old, found ", followed by the entire contents of uversion.h. I am using ICU 4.4.1 on Gentoo, so I checked uversion.h and it appears there is no U_ICU_VERSION_MAJOR_NUM or U_ICU_VERSION_MINOR_NUM; those defines are in uvernum.h.
Modifying FindICU.cmake accordingly works fine.
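
For reference, a minimal sketch of the kind of FindICU.cmake change that works (variable names here are illustrative, not necessarily the module's actual ones):

```
# Hypothetical sketch: ICU >= 4.4 moved the version macros to uvernum.h,
# so read that header when it exists and fall back to uversion.h otherwise.
if(EXISTS "${ICU_INCLUDE_DIR}/unicode/uvernum.h")
  file(READ "${ICU_INCLUDE_DIR}/unicode/uvernum.h" ICU_VERSION_HEADER_CONTENTS)
else()
  file(READ "${ICU_INCLUDE_DIR}/unicode/uversion.h" ICU_VERSION_HEADER_CONTENTS)
endif()
string(REGEX MATCH "#define U_ICU_VERSION_MAJOR_NUM ([0-9]+)"
       _ "${ICU_VERSION_HEADER_CONTENTS}")
set(ICU_VERSION_MAJOR "${CMAKE_MATCH_1}")
```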

@scottmac
Contributor

This was fixed about 45 minutes ago in 5fc3489.

h4ck3rm1k3 pushed a commit to h4ck3rm1k3/hhvm that referenced this issue Sep 13, 2011
Summary:
Declared the new functions and constants.

 diff facebook#2 will have a naive implementation, copying entire global states and
input parameters, and nullifying all fiber-unsafe extension objects.

 diff facebook#3 will experiment with adding a fiber Variant wrapper to do copy-on-write.

Test Plan:
nothing outstanding in this diff

DiffCamp Revision: 119354
Reviewed By: iproctor
CC: hphp-diffs@lists, iproctor
Tasks:

Revert Plan:
OK
h4ck3rm1k3 pushed a commit to h4ck3rm1k3/hhvm that referenced this issue Sep 13, 2011
Summary:
I actually had to fix how referenced Variants work. Also fixed some threading
problems. Anyway, everything seems to be working, even with global
variables, which are all treated the same way as referenced Variants in
parameters, except that we allow different "strategies" for resolving conflicts. I
also changed how the strategy is specified from end_user_func_async(), so as to
make the unmarshalling a lot more efficient without calling a user function for
every single global variable.
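
As a rough sketch, the user-facing shape of the API being built here looks like this (the placement of the strategy argument is an assumption about where this series is heading, not a finished interface):

```
<?php
// Hypothetical usage of the HipHop fiber calls discussed above.
function work($n) { return $n * 2; }

$handle = call_user_func_async('work', 21); // runs on another fiber
// ... the main fiber keeps going; globals are marshalled per this diff ...
$result = end_user_func_async($handle);     // the conflict-resolution
                                            // "strategy" is specified here
var_dump($result); // int(42)
```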

Need one more diff to finish off ExecutionContext and extension's ThreadLocals.

Also need a diff for HPHPi to work. I currently set Fiber.ThreadCount = 0 for
TestCodeRunEval::TestFiber.

Test Plan:
unit tests

DiffCamp Revision: 129781
Reviewed By: iproctor
CC: hphp-diffs@lists, iproctor
Tasks:

Revert Plan:
OK
h4ck3rm1k3 pushed a commit to h4ck3rm1k3/hhvm that referenced this issue Sep 13, 2011
Summary:
At shutdown time, by default HPHPi won't call object destructors. However, we
want to raise a warning to alert the programmer that the destructor isn't
called, so that the proper code change can happen.

Test Plan:
make fast_tests
[myang@dev1560] cat /tmp/t.php
<?php
class A { function __destruct() { cos(0); echo "~A\n"; } }
class B { function __destruct() { cos(0); echo "~B\n"; } }
class C { function __destruct() { cos(0); echo "~C\n"; } }
$a = new A; $b = new B; $c = new C;
var_dump($c);
[myang@dev1560] hphpi/hphpi /tmp/t.php
object(C)facebook#3 (0) {
}
HipHop Warning:  Object Destructor of class A is not called.
HipHop Warning:  Object Destructor of class B is not called.
HipHop Warning:  Object Destructor of class C is not called.

DiffCamp Revision: 204908
Reviewed By: qixin
Reviewers: qixin, mwilliams
CC: gpatangay, qixin, myang, hphp-diffs@lists
Revert Plan:
OK
hhvm-bot pushed a commit that referenced this issue Aug 10, 2018
Summary:
When the debugger client evaluates in the "hover" or "watch" contexts, if the expression cannot be evaluated, we should not be throwing PHP errors and printing a bunch of spew.

This diff fixes a few issues:
1. Do not print an exception to the console for eval hover errors
2. In Nuclide, don't show a data tip with "N/A" or an expression evaluation error when the user hovers over something that is undefined, a keyword, or not valid code.
3. The SilentEvaluationContext shouldn't be setting the ExecutionContext output buffer to nullptr - this is causing the error not to be suppressed, but instead written to the TLS request transport fd. If the debugger is in webserver attach mode, this causes the error to be written to the browser, which corrupts the webpage output!

Note re #3: As best I can tell, hphpd has the same bug due to null'ing the transport buffer during evaluation, but since the CLI doesn't have hover events, it's not noticeable.

Reviewed By: velocityboy

Differential Revision: D9171487

fbshipit-source-id: c83ddde333cb43f40a9c56dd3bde0cf94dfdde3d
fredemmott added a commit to fredemmott/hhvm that referenced this issue Oct 9, 2018
Needed for stream_await to work in CLI server mode.

Test plan:

```
fredemmott-pro:test fredemmott$ hhvm ./run.php --cli-server -b $(which hhvm) slow/async/streams/file.php
Started 1 servers in 0.5 seconds

Running 1 tests in 1 threads (0 in serial)

FAILED: slow/async/streams/file.php
001+ Fatal error: Uncaught exception 'Exception' with message 'Unable to await on stream, invalid file descriptor' in /Users/fredemmott/code/hhvm/hphp/test/slow/async/streams/file.php:7
002+ Stack trace:
003+ #0 /Users/fredemmott/code/hhvm/hphp/test/slow/async/streams/file.php(7): stream_await()
001- Done\.\s*
002- Done\.\s*
003- Done\.\s*
004+ facebook#1 (): doStuff()
005+ facebook#2 (): Closure$__SystemLib\enter_async_entry_point()
006+ facebook#3 (): HH\Asio\join()
007+ facebook#4 (): __SystemLib\enter_async_entry_point()
008+ facebook#5 {main}
009+
010+ Fatal error: Uncaught exception 'Exception' with message 'Unable to await on stream, invalid file descriptor' in /Users/fredemmott/code/hhvm/hphp/test/slow/async/streams/file.php:7
011+ Stack trace:
012+ #0 /Users/fredemmott/code/hhvm/hphp/test/slow/async/streams/file.php(7): stream_await()
013+ facebook#1 (): doStuff()
014+ facebook#2 (): Closure$__SystemLib\enter_async_entry_point()
015+ facebook#3 (): HH\Asio\join()
016+ facebook#4 (): __SystemLib\enter_async_entry_point()
017+ facebook#5 {main}
018+
019+ Fatal error: Uncaught exception 'Exception' with message 'Unable to await on stream, invalid file descriptor' in /Users/fredemmott/code/hhvm/hphp/test/slow/async/streams/file.php:7
020+ Stack trace:
021+ #0 /Users/fredemmott/code/hhvm/hphp/test/slow/async/streams/file.php(7): stream_await()
022+ facebook#1 (): doStuff()
023+ facebook#2 (): Closure$__SystemLib\enter_async_entry_point()
024+ facebook#3 (): HH\Asio\join()
025+ facebook#4 (): __SystemLib\enter_async_entry_point()
026+ facebook#5 {main}

1 tests failed
(╯°□°)╯︵ ┻━┻

See the diffs:
cat slow/async/streams/file.php.diff

For xargs, list of failures is available using:
cat /tmp/test-failuresgXfOHo

Run these by hand:
../../../../../../usr/local/Cellar/hhvm/3.28.3/bin/hhvm -c /Users/fredemmott/code/hhvm/hphp/test/slow/config.ini -vEval.EnableArgsInBacktraces=true -vEval.EnableIntrinsicsExtension=true -vEval.HHIRInliningIgnoreHints=false -vRepo.Local.Mode=-- -vRepo.Central.Path=/usr/local/Cellar/hhvm/3.28.3/bin/verify.hhbc -vEval.Jit=true  -vEval.ProfileHWEnable=false -vEval.HackCompilerExtractPath=/usr/local/Cellar/hhvm/3.28.3/bin/hackc_%{schema} -vEval.EmbeddedDataExtractPath=/usr/local/Cellar/hhvm/3.28.3/bin/hhvm_%{type}_%{buildid}  -vResourceLimit.CoreFileSize=0    --file 'slow/async/streams/file.php'  -vEval.UseRemoteUnixServer=only -vEval.UnixServerPath=/var/folders/3l/2yk1tgkn7xdd76bs547d9j90fcbt87/T/hhvm-cli-fgVhjx --count=3

Re-run just the failing tests:
./run --cli-server -b /usr/local/bin/hhvm $(cat /tmp/test-failuresgXfOHo)

Total time for all executed tests as run: 1.37s
Total time for all executed tests if run serially: 1.29s
fredemmott-pro:test fredemmott$ hhvm ./run.php --cli-server -b ../hhvm/hhvm slow/async/streams/file.php
Started 1 servers in 2.0 seconds

Running 1 tests in 1 threads (0 in serial)

All tests passed.
              |    |    |
             )_)  )_)  )_)
            )___))___))___)\
           )____)____)_____)\
         _____|____|____|____\\__
---------\      SHIP IT      /---------
  ^^^^^ ^^^^^^^^^^^^^^^^^^^^^
    ^^^^      ^^^^     ^^^    ^^
         ^^^^      ^^^

Total time for all executed tests as run: 1.24s
Total time for all executed tests if run serially: 1.16s
```
hhvm-bot pushed a commit that referenced this issue Feb 8, 2019
Summary:
When we localize a function type with where constraints, we might localize a type const access like T1::T. Whenever we localize a type const like T1::T, we create a tvar v2 representing T1::T and add it to the node for v1, which represents T1:
```
v1 -> {
  upper_bound: [ StringBox; ... ]
  lower_bounds: ...
  appears_covariantly: ...
  appears_contravariantly: ...
  type_consts: varid SidMap.t = [
    "T" -> v2
  ]
}
```
Note: if we localize T1::T::T, then we would need v1 for T1, v2 for v1::T and v3 for v2::T.

For all present and future upper bounds U of #1, localize U::T as a type variable #3 with the same constraints as T in U, then make #2 equivalent to #3.

For all present and future lower bounds L of #1, simply make #2 equivalent to L::T.

For upper bounds, we're just checking that the constraints on U::T are satisfied. For lower bounds L, however, we make #1::T equal to the type constant in L, possibly making it an expression-dependent type if T is abstract in L.
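
As a concrete (hypothetical) Hack fragment that exercises this path, reusing the StringBox bound from the node above:

```
interface StringBox {
  abstract const type T as string;
  public function get(): this::T;
}

// Localizing f's signature creates #1 for T1 and #2 for T1::T as described.
function f<T1 as StringBox>(T1 $box): T1::T where T1::T as arraykey {
  return $box->get();
}
```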

Reviewed By: andrewjkennedy

Differential Revision: D13958402

fbshipit-source-id: 23b8d157fd78ae27d777904bbea53a166f6a939b
hhvm-bot pushed a commit that referenced this issue May 1, 2019
Summary:
This diff is all about using Typing_union.simplify_unions instead of get_union_elements.

Why does it make things better?
Suppose you have type variables like:
```
vec<int> <: #1 <: #2
```
Since we transitively close, we'll get a constraint graph like:
```
vec<int> <: #1
vec<int>, #1 <: #2
```
get_union_elements was implemented in such a way that calling it on #2 would get vec<int> twice, resulting in the widened type being (vec<#3> | vec<#4>). You can see how this causes an explosion of type variables for bigger, transitively closed constraint graphs.

Reviewed By: andrewjkennedy

Differential Revision: D15149670

fbshipit-source-id: d74e3f0cc35de755025da087bda7606376206531
hhvm-bot pushed a commit that referenced this issue Jun 26, 2019
Summary:
```
Assertion failure: /tmp/hhvm-4.11-20190624-29824-15nb7r5/hhvm-4.11.0/hphp/compiler/analysis/analysis_result.cpp:53: HPHP::AnalysisResult::~AnalysisResult(): assertion `!m_finish' failed.

* thread #1, stop reason = signal SIGSTOP
  * frame #0: 0x00007fff6c6592c6 libsystem_kernel.dylib`__pthread_kill + 10
    frame #1: 0x00007fff6c70ebf1 libsystem_pthread.dylib`pthread_kill + 284
    frame #2: 0x00007fff6c576d8a libsystem_c.dylib`raise + 26
    frame #3: 0x00000000063f3a0f hhvm`HPHP::bt_handler(int, __siginfo*, void*) + 1655
    frame #4: 0x00007fff6c703b5d libsystem_platform.dylib`_sigtramp + 29
    frame #5: 0x00007fff6c6592c7 libsystem_kernel.dylib`__pthread_kill + 11
    frame #6: 0x00007fff6c70ebf1 libsystem_pthread.dylib`pthread_kill + 284
    frame #7: 0x00007fff6c5c36a6 libsystem_c.dylib`abort + 127
    frame #8: 0x0000000004df327f hhvm`HPHP::assert_fail(char const*, char const*, unsigned int, char const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 267
    frame #9: 0x0000000004dacd05 hhvm`HPHP::AnalysisResult::~AnalysisResult() + 189
    frame #10: 0x0000000004dae03e hhvm`HPHP::Compiler::emitAllHHBC(std::__1::shared_ptr<HPHP::AnalysisResult>&&) + 500
    frame #11: 0x0000000002c10b87 hhvm`HPHP::hhbcTarget(HPHP::CompilerOptions const&, std::__1::shared_ptr<HPHP::AnalysisResult>&&, HPHP::AsyncFileCacheSaver&) + 753
    frame #12: 0x0000000002c0fdb7 hhvm`HPHP::process(HPHP::CompilerOptions const&) + 2152
    frame #13: 0x0000000002c0ce34 hhvm`HPHP::compiler_main(int, char**) + 489
    frame #14: 0x00007fff6c51e3d5 libdyld.dylib`start + 1
```

Given our Linux clang builds are fine, likely libc++ vs libstdc++ issue.

markw65 found - in the standard:

> For (4), other is in a valid but unspecified state after the call
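
As a hedged illustration of that clause (std::function is one common case where moved-from state legitimately differs between libstdc++ and libc++):

```
#include <functional>
#include <iostream>

int main() {
  std::function<void()> f = [] { std::cout << "hi\n"; };
  auto g = std::move(f);
  // Per the standard, f is now "valid but unspecified": libstdc++ happens to
  // leave it empty, but libc++ need not. Asserting anything about f's state
  // here is non-portable -- the same shape of bug as the assert above.
  std::cout << std::boolalpha << static_cast<bool>(f) << "\n";
  g();
}
```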

Reviewed By: markw65

Differential Revision: D15997122

fbshipit-source-id: c72950e1365f1141445794608f9e56f1c5d6ed89
hhvm-bot pushed a commit that referenced this issue Aug 8, 2019
hhvm-bot pushed a commit that referenced this issue Aug 17, 2019
Summary: This diff moves the destructuring subtyping rules to be above the catch-all `Tvar _, _` rule and fixes the `ty_compare` function in the case of Tdestructure. Previously, it was saying that `list(#1, #2)` is equivalent to `list(#3, #4)`, so the lower bounds on the inner type variables were not being computed.

Reviewed By: kmeht

Differential Revision: D16846102

fbshipit-source-id: 051685e9991a63bc5dc09a3eba568a8027816ce7
facebook-github-bot pushed a commit that referenced this issue Nov 15, 2019
Summary:
In the smart constructor for unions, Typing_union.make_union, if the result is a union which contains unsolved type variables, then we wrap it in a type variable.
We do something similar with `make_intersection`.

When a type variable gets solved, unions containing this type variable may be solved in `Typing_type_simplifier` if all type variables from this union are solved.

This is because there are a lot of types like
```
( Vector<#1> |
  Vector<#2> |
  ...
  Vector<#n> )
```
If each time a type variable gets solved, we attempt a union simplification which is quadratic in n, we get bad perf.

Therefore, we only simplify a type when all type variables in it are solved.
To that end, we also maintain a map `tyvars_in_type` in the environment that is the dual of the other map `tyvar_occurrences` introduced in a previous diff. Furthermore, we now maintain the invariant that we only keep the occurrence links if the type variable that occurs is unsolved or itself contains unsolved type variables.

So for example, for
```
#1 -> (#2 | #3)
#2 -> (int | #4)
#4 -> string
#3 -> #5
#5 unsolved
```
We'd have recorded that #5 occurs in #3 and #3 in #1, but since #2 and #4 are solved and do not contain any unsolved type variables, we don't record their occurrences.

We can now simplify a type variable's type only when its entry in `tyvars_in_type` is empty.

With the previous example, when #5 gets solved: if it gets solved to something that still contains unsolved type variables, we don't do any simplification; but if it gets solved to, say, int, then to maintain the invariant described above we'd remove it from the types in #3, and since #3 would then no longer contain any unsolved type variables we could simplify it, then recursively remove it from the types in #1 and finally simplify #1 itself.

We also remove some now-unnecessary usages of simplify_unions.

Reviewed By: andrewjkennedy

Differential Revision: D18059114

fbshipit-source-id: 73eb8c40087d519246306382b9c2ab99ad1deb20
facebook-github-bot pushed a commit that referenced this issue Jan 20, 2020
Summary:
This diff changes splat (...) unpacking to use Tdestructure, and extends Tdestructure to have 3 main boxes for unpacking. There are two cases to explore:

# Tuple destructuring

Now, `Tdestructure` for splat (henceforth "splat(...)") has three separate boxes for types. For example, the following example

```
function take(bool $b, float $f = 3.14, arraykey ...$aks): void {}
function f((bool, float, int, string) $tup): void {
  take(...$tup);
}
```
corresponds to the subtyping assertion
```
(bool, float, int, string) <: splat([#1], [opt#2], ...#3)
```
and results in
```
bool <: #1
float <: #2
arraykey <: #3
```

First, it is required that the tuple have enough elements to fill the required values -- in this case just `[#1]`. Then, if there are remaining elements, they are first used to fill the optional elements `[#2]`. Finally, the remainder are collected in the variadic element `#3`. If tuple elements remain but there is no variadic element, then we emit an error.

# Array destructuring

Previously, the typechecker would allow nonsense like
```
function f(int $i = 3): void {}
function g(int $i, int ...$j): void {}

function passes_typechecker(): void {
  f(...vec[1,2]); // throws
  f(...vec[]); // throws
}
```
Now, we require that splat operations with arrays have a variadic component to accept the values. We additionally ban unpacking into the required arguments of a function because the array could be empty. Unpacking into optionals is OK. Basically, we require that the function signature can accept the entire array.

**Note:** we still allow partial array destructuring with *list destructuring*, so
```
list($a, $b) = vec[1,2,3,4];
```
is still valid and is represented as
```
vec<int> <: list(#1, #2)
```

# Typing Rules

In these rules `r` is a type of a required field, `o` is a type of an optional field, `v` is the type of the variadic field.

# destruct_tuple_list
```
forall i in 1...n

             t_i <: r_i
--------------------------------------
(t_1, ..., t_n) <: list(r_1, ..., r_n)
```
**Example**
```
function test_runner((int, string, bool) $tup): void {
  list($a, $b, $c) = $tup;
}
```

# destruct_tuple_splat
```
forall i in 1...m
forall j in (if p < n then m+1...p else m+1...n)
forall k in p+1...n

     m <= n    t_i <: r_i    t_j <: o_j    t_k <: v
-----------------------------------------------------------
(t_1, ..., t_n) <: splat(r_1, ..., r_m, o_m+1, ..., o_p, v)
```
**Example**
```
function take(int $i, string $j, bool $kopt = false, float $fopt = 3.14, mixed ...$rest): void {}

function test_runner((int, string) $tup): void { take(...$tup); }
function test_runner2((int, string, bool) $tup): void { take(...$tup); }
function test_runner3((int, string, bool, float) $tup): void { take(...$tup); }
function test_runner4((int, string, bool, float, null, int, mixed) $tup): void { take(...$tup); }
```

# destruct_array_list

This rule already exists, and it allows for operations that can throw.
```
      forall i in 1...n   t <: r_i
-------------------------------------------
        vec<t> <: list(r_1, ... r_n)
     Vector<t> <:
  ImmVector<t> <:
ConstVector<t> <:
```
**Example**
```
list($a, $b, $c) = vec[3]; // legal, but will throw
list($a) = vec[2,3,5]; // legal even though incomplete
```

# destructure_array_splat

Note the lack of required fields in this rule, and the presence of the variadic.

```
 forall i in 1..n   t <: o_i    t <: v
----------------------------------------
Traversable<t> <: splat(o_1, ... o_n, v)
```

**Example**
```
function take1(int $i = 3, int ...$is): void {}
function take2(int ...$is): void {}

function test(vec<int> $v): void {
   take1(...$v);
   take2(...$v);
}
```

Reviewed By: CatherineGasnier

Differential Revision: D18633271

fbshipit-source-id: 2b7a7beebf2ce30c2cb2765f75de2db6bdb3c24e
facebook-github-bot pushed a commit that referenced this issue May 1, 2020
Summary:
This is a redo of D20805919 with a fix for the crash, described in T65255134.

Crash stack:

```
#0  0x0000000006566dbe in HPHP::arrprov::tagFromPC () at hphp/runtime/base/array-provenance.cpp:343
#1  0x000000000670b2d5 in HPHP::arrprov::tagStaticArr (tag=..., ad=<optimized out>) at hphp/runtime/base/array-provenance.cpp:291
#2  HPHP::ArrayData::CreateDict (tag=..., tag=...) at buck-out/opt-hhvm-lto/gen/hphp/runtime/headers#header-mode-symlink-tree-only,headers,v422239b/hphp/runtime/base/array-data-inl.h:105
#3  HPHP::Array::CreateDict () at buck-out/opt-hhvm-lto/gen/hphp/runtime/headers#header-mode-symlink-tree-only,headers,v422239b/hphp/runtime/base/type-array.h:97
#4  HPHP::IniSettingMap::IniSettingMap (this=<optimized out>, this=<optimized out>) at hphp/runtime/base/ini-setting.cpp:517
#5  0x00000000068835bf in HPHP::IniSettingMap::IniSettingMap (this=0x7fff36f4d8f0) at hphp/runtime/base/config.cpp:370
#6  HPHP::Config::Iterate(std::function<void (HPHP::IniSettingMap const&, HPHP::Hdf const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)>, HPHP::IniSettingMap const&, HPHP::Hdf const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool) (cb=..., ini=..., config=..., name=..., prepend_hhvm=prepend_hhvm@entry=true) at hphp/runtime/base/config.cpp:370
#7  0x0000000006927373 in HPHP::RuntimeOption::Load (ini=..., config=..., iniClis=..., hdfClis=..., messages=messages@entry=0x7fff36f4e170, cmd=...)
    at third-party-buck/platform007/build/libgcc/include/c++/trunk/bits/std_function.h:106
#8  0x000000000695aaee in HPHP::execute_program_impl (argc=argc@entry=21, argv=argv@entry=0x7fff36f4fac8) at hphp/runtime/base/program-functions.cpp:1716
```

The immediate cause of the crash is trying to access vmfp() inside tagFromPC before the VM was initialized, resulting in a null pointer dereference.
However, provenance tagging should actually have used the runtime tag override set in RuntimeOption::Load, instead of trying to compute the tag off the PC.
The problem is: TagOverride short-circuits itself if Eval.ArrayProvenance is disabled.

So we run into the following sequence of events:
1. we start with EvalArrayProvenance=false - default value
2. TagOverride short-circuits and doesn't actually update the override
3. we parse config options and set EvalArrayProvenance=true
4. we try to create a dict, decide that it needs provenance, and try to compute a tag. Since there is no override set, we fall back to tagFromPC and crash

To fix this I made TagOverride not short-circuit for this one specific call site.
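
A self-contained sketch of the short-circuit pattern at fault (all names below are stand-ins, not the real HHVM types):

```
namespace sketch {
bool EvalArrayProvenance = false;          // stand-in for the runtime flag
thread_local const char* tl_tag = nullptr; // stand-in for the override slot

struct TagOverride {
  // Buggy behavior: when the flag is off at construction time, the guard
  // installs nothing, so flipping the flag on later (during config parsing)
  // leaves no override in place and the tagFromPC() fallback runs. The fix
  // is to pass force=true at the one call site that needs it.
  explicit TagOverride(const char* tag, bool force = false)
      : m_active(force || EvalArrayProvenance) {
    if (m_active) { m_saved = tl_tag; tl_tag = tag; }
  }
  ~TagOverride() { if (m_active) tl_tag = m_saved; }

  bool m_active;
  const char* m_saved = nullptr;
};
} // namespace sketch
```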

The specific nature of the bug also explains why it didn't get caught by any prior testing:
- hphp tests could not catch it, because they run with a minimal config, which doesn't exercise Config::Iterate
- I didn't specifically test the sandbox with array provenance enabled, which would have caught it
- This wouldn't be caught in servicelab, since we enable provenance in SL by changing default values. Also, I didn't run SL with provenance enabled for this.

What would catch this is starting either a sandbox or a prod-like web server with an actual config.hdf or config.hdf.devrs and array provenance enabled via config or command-line options.

Differential Revision: D20974470

fbshipit-source-id: 6474fe4e4cf808c4e8572539119cd57374658877
facebook-github-bot pushed a commit that referenced this issue Jul 14, 2020
Summary:
I was trying to run servicelab with provenance enabled and ran into an HHBC crash. Stack trace:

```
#0  0x0000000005e2b553 in HPHP::arrprov::tagFromPC () at buck-out/opt-hhvm-lto/gen/hphp/runtime/headers#header-mode-symlink-tree-only,headers,v5c9e8e3/hphp/runtime/base/rds-header.h:127
#1  0x0000000005e2f059 in HPHP::arrprov::tagStaticArr (ad=<optimized out>, tag=...) at hphp/runtime/base/array-provenance.cpp:297
#2  0x00000000007b8b5b in HPHP::ArrayData::CreateDArray (tag=..., tag=...) at buck-out/opt-hhvm-lto/gen/hphp/runtime/headers#header-mode-symlink-tree-only,headers,v5c9e8e3/hphp/runtime/base/array-data-inl.h:60
#3  HPHP::Array::CreateDArray () at buck-out/opt-hhvm-lto/gen/hphp/runtime/headers#header-mode-symlink-tree-only,headers,v5c9e8e3/hphp/runtime/base/type-array.h:109
#4  HPHP::preg_match_impl (pattern=pattern@entry=0x7f408a805860, subject=subject@entry=0x7f408a805840, subpats=subpats@entry=0x0, flags=flags@entry=0, start_offset=start_offset@entry=0, global=false) at hphp/runtime/base/preg.cpp:1146
#5  0x00000000060bdf89 in HPHP::preg_match (offset=0, flags=0, matches=0x0, subject=<optimized out>, pattern=0x7f408a805860) at buck-out/opt-hhvm-lto/gen/hphp/runtime/headers#header-mode-symlink-tree-only,headers,v5c9e8e3/hphp/runtime/base/memory-manager-inl.h:114
#6  HPHP::preg_match (flags=0, offset=0, matches=0x0, subject=..., pattern=...) at hphp/runtime/base/preg.cpp:1382
#7  HPHP::Config::matchHdfPattern (value=..., ini=..., hdfPattern=..., name=..., suffix=...) at hphp/runtime/base/config.cpp:395
#8  0x000000000885be96 in HPHP::(anonymous namespace)::applyBuildOverrides (ini=..., config=...) at hphp/util/hdf.cpp:90
#9  0x000000000886f000 in HPHP::prepareOptions (argv=<optimized out>, argc=51, po=...) at hphp/compiler/compiler.cpp:450
#10 HPHP::compiler_main (argc=51, argv=<optimized out>) at hphp/compiler/compiler.cpp:157
#11 0x00007f4120b001a6 in __libc_start_main (main=0x1cf2080 <main(int, char**)>, argc=52, argv=0x7ffda8b93048, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffda8b93038) at ../csu/libc-start.c:308
#12 0x000000000b31cb5a in _start () at ../sysdeps/x86_64/start.S:120
```

Differential Revision: D22519863

fbshipit-source-id: 2ce590181d7f7259e490ce3206c08e76f4eaaa4b
facebook-github-bot pushed a commit that referenced this issue Nov 2, 2020
Summary:
My goal is to get a handle on telemetry about which decls are read, when, and why. A bunch of costly decls are read during pocket-universes and in a few other places. To get higher-value telemetry, I want to know which filename we're currently checking at the time we fetch decls. That's what this diff achieves.

With this diff, I complete my project of getting high-quality telemetry about what/when/why/where are decls being read. Hurrah.

Reviewed By: CatherineGasnier

Differential Revision: D24585863

fbshipit-source-id: 7d338b5d58bb19e6a6bc4364ece38c0958948cc5
facebook-github-bot pushed a commit that referenced this issue May 11, 2021
Summary:
This diff addresses Monitor_connection_failure. It's all related to the question of how we behave when we have a huge number of incoming requests. The end result is (1) no more Monitor_connection_failure ever, (2) hh will wait indefinitely, or up to --timeout, for a connection to the server.
```
[client] * N -> [monitor] -> [server]
```
1. The server has a finite incoming monitor->server pipe, and it runs an infinite loop: (1) pick the next workitem of the incoming pipe, (2) that frees up a slot in the pipe so the monitor will be able to write the next item into it, (3) do a typecheck if needed, (4) handle the workitem
2. The monitor has a socket with a finite incoming queue, and runs an infinite loop: (1) pick the next workitem off the incoming queue, (2) that frees up a slot in the queue so the next client will be able to write into it, (3) attempt to write the workitem on the monitor->server pipe, but if that queue has remained full for 4s then communicate back to the workitem's client that it was busy.
3. Each client invocation attempts to put a workitem on the monitor's queue. It has a timeout of 1s. If the monitor's queue was full, it fails fatally with Monitor_connection_failed. If the monitor failed to handoff then it fails with Server_hung_up_should_retry and retries exponentially.

## Callers of hh_client should decide timeouts.

There's a "hh_client --timeout" parameter (by default infinite). The right design is that if the server is too busy, we should wait until it's available, up to that timeout. It's not right for us to unilaterally fail fatally when the server is busy. That's what this diff achieves. This means we no longer have a finite queue of hh_client requests (manifesting as Monitor_connection_failure roughly when it gets too big); it's now an infinite queue, embodied in the unix process list, as a set of processes all waiting for their chance.

> Note: HackAst generally doesn't specify a timeout. (1) If arc lint kicks off too many requests, that used to manifest as some of them getting failures but hh_server becomes available after 30-60s as they all fail; now it'll just go in a big queue, and the user will see in the spinner that hh_server is busy with a linter. (2) If an flib test kicks off too many requests, that used to manifest as some of them getting Monitor_connection_failure failures; now it'll manifest as the whole flib test timing out. (3) If a codemod kicks off too many requests, that used to manifest as the codemod failing with Monitor_connection_failure; now it'll manifest as the codemod taking longer but succeeding.

## It's all about backpressure.

Imagine that the system is full to capacity with short requests. Backpressure from the server is indicated by it only removing items from the monitor->server pipe as the server handles requests. This backpressure goes to the monitor, which can only handoff to the server once per server-handling, and hence can only pull items off its own incoming queue at that time too. This backpressure goes to the clients, which will timeout if they can't put an item on the monitor's queue. To make backpressure flow correctly, this diffstack makes the following changes:
1. The monitor will wait up to 30s for the server to accept a workitem. All normal server requests will be handled in this time. The ones that won't are slow find-all-refs and slow typechecks. (The previous 4s meant that even normal requests got the user-facing message "server isn't responding; reconnecting...")
2. The client will have a timeout up to "hh_client --timeout" in its attempts to put work onto the monitor's incoming queue. If the user didn't specify --timeout (i.e. infinite) then I put a client timeout of 60s plus automatic immediate retries. (The previous Monitor_connection_failure fatal error was the whole problem).

## We never want the server to go idle.

The best way for the monitor to wait is *as part of* its attempt to handoff to the server. This way, as soon as the server is ready, it will have another workitem and will never go idle. Likewise, the best way for a client to wait is *as part of* its attempt to establish a connection to the monitor and put a workitem on the monitor's incoming queue. That way, as soon as the monitor is ready, it will have another workitem and will never go idle. We should *not* be waiting in `Unix.sleep`; that's always wrong. Effectively, our clients will park themselves in the unix process list, and the kernel will know that they're waiting for a slot to become available in the monitor's incoming queue, and will wake them up when it becomes available. This is better than the current solution of cancelling and retrying every 1s, a kind of "busy spin loop" which pointlessly burns CPU.

## Why is the monitor-to-server handoff timeout 30s?

For the handoff timeout, why did I put 30s rather than infinite?

There is only one scenario where we will exceed the 30s timeout. That's when both (1) the server's queue is full because there's been a high rate of incoming requests, (2) the server's current work, either for a file-change-triggered typecheck or a costly client request, takes longer than 30s.

*This scenario is a misuse of hack*. It simply doesn't make sense to have a high volume of costly requests. The computer will never be able to keep up. It's fine to have a high volume of cheap requests e.g. from typical Aurora linters. It's fine to have a low volume of expensive requests e.g. from a file change, or --find-refs. But the combination should be warned about. When the handoff times out, the monitor will send Monitor_failed_to_handoff to the client, which will display to the user "Hack server is too busy", and fail with exit code Server_hung_up_should_retry, and find_hh.sh will display "Reconnecting..." and retry with exponential backoff. That's not the mechanism I'd have chosen, but it's at least a reasonable mechanism to alert the user that something's seriously wrong, while still not failing.

If I'd used an infinite timeout then the user would never see "Hack is too busy. Reconnecting...". Also I have a general distrust of infinite timeouts. I worry that there will be cases where something got stuck, and I'd like for the user to at least become unstuck after a minute. If I'd used a smaller timeout like the current 4s, then the user will see this "Reconnecting..." message even in reasonable high-request-rate scenarios like --type-at-pos-batch. 30s feels like a good cutoff. It will display a message in the CLI if hack is being misused, as I described.

## Why is the client timeout 60s?

In the (default) case where the user didn't specify a --timeout parameter and we therefore use infinite timeout, I again got cold feet at putting an infinite timeout. I instead set it at 60s plus immediate retry if that times out. This is a bit like a busy spin loop, but it's not terribly busy. I got cold feet because there might be unknown other causes of failure and I didn't want them to just hang indefinitely.

Let's examine what the client code actually does for the "60s/retry" case:
1. Server_died. This happens from ServerMonitor sending Prehandoff.Server_died or Prehandoff.Server_died_config_changed. (I think there are bugs here; if the server died then the current hh_client should exit with code 6 "server_hung_up_should_retry" so that find_hh.sh will launch a new version of the hh_client binary. But I'll leave that for now). In this case hh_client retries
2. Timeout for the monitor to accept the request. This means there is backpressure from the monitor telling clients to slow down. This is the case where hh_client used to fatally exit with Monitor_connection_failure exit code 9. Now, with this diff, it hits the 60s timeout then immediately retries.
3. Timeout AFTER the monitor has accepted the request but before it's handed off. In this case hh_client has always retried
4. Timeout while the monitor attempts to handoff. This is the "30s monitor handoff timeout" discussed above. In this case hh_client exits with exit code 6, and find_hh.sh retries with exponential backoff, and the user sees "Server has hung up. Reconnecting..."
5. We don't even measure timeout while waiting for the server to process the request.

In the past (see comments in previous diff), if case 2 just does an immediate retry, then the test plan gets into a problem state where the monitor is just stuck trying to read client version. Previously it had been described as "wedged". I think that was caused by clients having a low timeout of just 1s, smaller indeed than the monitor->server timeout. But since I changed the client timeout to 60s that no longer happens.

*30s < 60s to avoid livelock pathology.*

The client's timeout must be comfortably longer than the monitor's timeout. Imagine if it were reversed, and the client's timeout was 30s while the monitor's was 60s. Then every workitem on the monitor's queue would be stale by the time the monitor picks it up, so it won't get anything to send to the server; after it burns through its queue rejecting every item (since each client is no longer around), it'll have to just wait until a client retries. That would be a self-inflicted busy loop.

Let's spell it out. It had been observed in June 2017 that if there were 1000 parallel clients then the monitor became "wedged". The report didn't define precisely what was observed, but D5205759 (b57d70b) explained a hypothesis:
1. Imagine the monitor's incoming socket queue is size 1000, and there are 1000 parallel clients.
2. Imagine that the first 999 of these clients has already given up (due to timeout), and so as the monitor works through them, for each one when it attempts to send Connection_ok back to that client, it will get EPIPE. Imagine it takes the monitor 1ms for this.
3. Clients in general have a 1s timeout which encompasses both waiting until the monitor enqueues it, and waiting until the monitor responds to it. If this deadline expires then they wait 0.1s and retry.
4. Imagine our client was number 1000. Because it already took 999ms to get EPIPE back from the earlier items, by the time the monitor is ready to deal with us we've probably expired our 1s time limit, so the monitor will cancel us too.
5. In this time 999 other clients have again added themselves to the queue. We too will add ourselves to the queue. And the cycle will repeat again.

I experimented prior to this diffstack but was unable to reproduce this behavior. What I instead observed was that the client was in an ongoing loop of trying to connect to the monitor but failing because the monitor's incoming queue was full, and the monitor had failed to pick up anything from its queue for over five minutes. Here's what the monitor log and pstack looked like:
```
[2021-05-04 15:31:28.825] [t#O8Dg1HTCkS] read_version: got version {"client_version":"","tracker_id":"t#O8Dg1HTCkS"}, started at [2021-05-04 15:31:28.825]
[2021-05-04 15:31:28.825] [t#O8Dg1HTCkS] sending Connection_ok...
[2021-05-04 15:31:28.825] SIGPIPE(-8)
[2021-05-04 15:31:28.825] Ack_and_handoff failure; closing client FD: Unix.Unix_error(Unix.EPIPE, "write", "")
[2021-05-04 15:31:28.825] [t#oKIHi8xMAS] read_version: got version {"client_version":"","tracker_id":"t#oKIHi8xMAS"}, started at [2021-05-04 15:31:28.825]
[2021-05-04 15:31:28.825] [t#oKIHi8xMAS] sending Connection_ok...
[2021-05-04 15:31:28.825] SIGPIPE(-8)
[2021-05-04 15:31:28.825] Ack_and_handoff failure; closing client FD: Unix.Unix_error(Unix.EPIPE, "write", "")

#1  0x00000000025d0b31 in unix_select (readfds=<optimized out>, writefds=<optimized out>, exceptfds=<optimized out>, timeout=113136840) at select.c:94
#2  0x00000000022ef1f4 in camlSys_utils__select_non_intr_8811 () at /data/users/ljw/fbsource/fbcode/hphp/hack/src/utils/sys/sys_utils.ml:592
#3  0x00000000022a2666 in camlMarshal_tools__read_217 () at /data/users/ljw/fbsource/fbcode/hphp/hack/src/utils/marshal_tools/marshal_tools.ml:136
#4  0x00000000022a302b in camlMarshal_tools__from_fd_with_preamble_383 () at /data/users/ljw/fbsource/fbcode/hphp/hack/src/utils/marshal_tools/marshal_tools.ml:263
#5  0x0000000001f90116 in camlServerMonitor__read_version_2818 () at /data/users/ljw/fbsource/fbcode/hphp/hack/src/monitor/serverMonitor.ml:315
#6  0x0000000001f9106a in camlServerMonitor__ack_and_handoff_client_4701 () at /data/users/ljw/fbsource/fbcode/hphp/hack/src/monitor/serverMonitor.ml:548
#7  0x0000000001f92509 in camlServerMonitor__check_and_run_loop__4776 () at /data/users/ljw/fbsource/fbcode/hphp/hack/src/monitor/serverMonitor.ml:842
#8  0x0000000001f920cc in camlServerMonitor__check_and_run_loop_inner_5965 () at /data/users/ljw/fbsource/fbcode/hphp/hack/src/monitor/serverMonitor.ml:779
```
The client log is monstrously huge and full, for the final five minutes, of repeated (and failed-due-to-timeout) attempts to open the socket to the monitor:
```
[2021-05-04 15:53:00.223] [Check#IRhKGkxQn7] [t#QpQxPnW9wt] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.223] [Check#0NzdVomX9R] [t#6QT1JEbgJK] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.223] [Check#I9hYXmPzb0] [t#o9PRU7m1lM] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.224] [Check#CLtBMgCEVG] [t#WwlyQwUDhn] [client-connect] ClientConnect.connect: attempting MonitorConnection.connect_once
[2021-05-04 15:53:00.224] [Check#CLtBMgCEVG] [t#WwlyQwUDhn] Connection_tracker.Client_start_connect
[2021-05-04 15:53:00.224] [Check#zXvag0PQCK] [t#2NOmFISCOc] [client-connect] ClientConnect.connect: attempting MonitorConnection.connect_once
[2021-05-04 15:53:00.224] [Check#YoJPxiGVjK] [t#CICotuALSY] [client-connect] ClientConnect.connect: attempting MonitorConnection.connect_once
[2021-05-04 15:53:00.224] [Check#jCTjq2AcPp] [t#HvO1E3Rv10] [client-connect] ClientConnect.connect: attempting MonitorConnection.connect_once
[2021-05-04 15:53:00.224] [Check#VShC3XccG0] [t#LLbhdQhcch] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.224] [Check#YoJPxiGVjK] [t#CICotuALSY] Connection_tracker.Client_start_connect
[2021-05-04 15:53:00.224] [Check#MhnHnQmHIj] [t#u3gm7WR4B3] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.224] [Check#jCTjq2AcPp] [t#HvO1E3Rv10] Connection_tracker.Client_start_connect
[2021-05-04 15:53:00.224] [Check#U72xNuSb7N] [t#S47cx27Lta] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.224] [Check#zXvag0PQCK] [t#2NOmFISCOc] Connection_tracker.Client_start_connect
[2021-05-04 15:53:00.224] [Check#eGHledjDNT] [t#bQPqB2iG6c] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.224] [Check#7xj4rdPkhA] [t#NKGlmJYKAC] [client-connect] ClientConnect.connect: attempting MonitorConnection.connect_once
[2021-05-04 15:53:00.225] [Check#7xj4rdPkhA] [t#NKGlmJYKAC] Connection_tracker.Client_start_connect
[2021-05-04 15:53:00.225] [Check#kxQ7dr1Ve2] [t#JKFk5Pl5DB] [client-connect] ClientConnect.connect: attempting MonitorConnection.connect_once
[2021-05-04 15:53:00.225] [Check#kxQ7dr1Ve2] [t#JKFk5Pl5DB] Connection_tracker.Client_start_connect
[2021-05-04 15:53:00.225] [Check#s5HbtmgdVN] [t#ywpyouI5po] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.225] [Check#5192H11uaP] [t#ku2JO0LNvX] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
[2021-05-04 15:53:00.225] [Check#yOV3DR3w9] [t#r1irfsdLn5] CLIENT_CONNECT_ONCE_FAILURE {"kind":"Connection_to_monitor_Failure","server_exists":true,"phase":"ServerMonitorUtils.Connect_open_socket","reason":"timeout","exn":null,"exn_stack":null}
```

Differential Revision: D28205345

fbshipit-source-id: a0f011ff2bcb379c3186ae5e319c4d1e9912c988
facebook-github-bot pushed a commit that referenced this issue May 13, 2021
Summary:
The original/'fixed' version of this regex put the `\d+` inside a character class, so the `+` never worked correctly for double-digit counts, resulting in errors like this.

```
> hphp/test/run --retranslate-all 5

Fatal error: Uncaught exception 'HH\InvariantException' with message 'unexpected output from shell_exec in runif_test_for_feature: 'Notice: File could not be loaded: 0'' in /data/users/wkl/fbsource/fbcode/hphp/test/run.php:2133
Stack trace:
#0 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(2133): HH\invariant_violation()
#1 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(2189): runif_test_for_feature()
#2 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(2326): runif_function_matches()
#3 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(2965): runif_should_skip_test()
#4 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(2935): run_test()
#5 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(1988): run_and_log_test()
#6 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(3770): child_main()
#7 /data/users/wkl/fbsource/fbcode/hphp/test/run.php(3897): main()
#8 (): run_main()
#9 {main}
```
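
A minimal illustration of the pitfall (the pattern is hypothetical, but the semantics match run.php's preg_match usage):

```
<?php
// Inside a character class, '+' is a literal character, not a quantifier,
// so [\d+] matches exactly one digit (or a literal '+').
var_dump(preg_match('/count ([\d+])$/', 'count 5'));  // int(1)
var_dump(preg_match('/count ([\d+])$/', 'count 15')); // int(0): the bug
var_dump(preg_match('/count (\d+)$/', 'count 15'));   // int(1): the fix
```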

how_to_regex_garabatokid

Reviewed By: ottoni, mofarrell

Differential Revision: D28401997

fbshipit-source-id: 54dff6a21612cdeea53ddc31e99f4e41fd8205c9
facebook-github-bot pushed a commit that referenced this issue Jul 8, 2021
Summary:
My test runs are failing with errors like this: https://www.internalfb.com/intern/testinfra/diagnostics/5629499593789230.281474979307742.1625688677/

Text:

```
Failed to list tests.
STDOUT:{"args":"-m interp,jit","name":"hphp/test/server/debugger/tests/runTest1.php"}

STDERR:
Fatal error: Uncaught exception 'InvalidOperationException' with message 'Implicit null to string conversion for string concatenation/interpolation' in /data/sandcastle/boxes/eden-trunk-hg-fbcode-fbsource/fbcode/hphp/test/run.php:195
Stack trace:
#0 /data/sandcastle/boxes/eden-trunk-hg-fbcode-fbsource/fbcode/hphp/test/run.php(3661): success()
#1 /data/sandcastle/boxes/eden-trunk-hg-fbcode-fbsource/fbcode/hphp/test/run.php(3942): main()
#2 (): run_main()
#3 {main}
```

Following the stacktrace, we see that the error occurs here:

https://www.internalfb.com/code/fbsource/[4981de7c3ed6cbbcd309a95da5b0c924e7c00b7e][history]/fbcode/hphp/test/run.php?lines=194-197

One frame up:

https://www.internalfb.com/code/fbsource/[4981de7c3ed6cbbcd309a95da5b0c924e7c00b7e][history]/fbcode/hphp/test/run.php?lines=3661

And `list_tests()` is a void function:

https://www.internalfb.com/code/fbsource/[4981de7c3ed6cbbcd309a95da5b0c924e7c00b7e][history]/fbcode/hphp/test/run.php?lines=778-795

So if we can't insert nulls into strings, this is bound to fail every time.
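
A hedged minimal repro of that failure mode (the function name is reused from run.php; the strict-coercion behavior is HHVM's, not stock PHP's):

```
<?php
// list_tests() returns void, so $output is null; under HHVM's strict
// coercion settings, concatenating null throws instead of yielding "".
function list_tests(): void { /* prints test names, returns nothing */ }

$output = list_tests();
echo "result: " . $output; // InvalidOperationException on HHVM
```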

Reviewed By: DavidSnider

Differential Revision: D29597205

fbshipit-source-id: 852ffccf2f337b6dc1cb0909b3a276d97eced04f
facebook-github-bot pushed a commit that referenced this issue Jul 18, 2022
Summary:
Environment looping for loops was buggy in a very interesting way. This diff fixes it.

Consider the following fragment where the first dictionary is allocated at position `p` and the second one at `q`.
```
$d = dict['p' => 42];
while(...) {
  $d = dict['q' => true];
}
```

We were generating the constraints,
```
p -> #1 // line 1
q -> #2 // line 3
#1 \/ #2 -> #3 // line 4 on exit
#2 -> #1 // line 2-4
```
The last constraint is added because the definition of `$d` at the end of the loop needs to be related to the beginning of the loop.

However, this implies the following
```
p -> #1
q -> #1
q -> #2
```

The way we compute optional fields is by checking whether the join point has an entity in one branch but not in the other. So the solver concluded that `p` was an optional field but `q` was not (despite the fact that `q` is also optional, since the loop body might never run)!

The problem is that we were using the same environment at the beginning of the loop body and right before it. The solution is to simply create a redirection when we start typechecking the loop body and loop back onto that. This leads to the constraints:

```
p -> #1 // line 1
#1 -> #2 // due to refresh
q -> #3 // line 3
#1 \/ #3 -> #4 // line 4 on exit
#3 -> #2 // line 2-4
```

Then the solver concludes
```
p -> #1
q -> #3
q -> #2
```

Consequently, the solver concludes that at the join point `#4` both `p` and `q` are optional as one doesn't exist in `#1` and the other doesn't exist in `#3`.

Differential Revision: D37689958

fbshipit-source-id: 66cc8242fd87d99b5c495ff2747d60ba04a60a5a
facebook-github-bot pushed a commit that referenced this issue Oct 7, 2022
Summary:
We have seen a deadlock running `terminationHandler` -> `hasSubscribers` in 2 threads.
It's unclear which other thread is holding the lock.

To make things easier to debug next time, let's change terminationHandler (and
also main.cpp) to bypass the logging lock and write to stderr directly.
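
A sketch of the mitigation (stand-in code, not the actual SignalHandler.cpp change):

```
#include <cstdlib>
#include <unistd.h>

namespace sketch {
void terminationHandler() noexcept {
  const char msg[] = "std::terminate() called; aborting\n";
  // write(2) is async-signal-safe and takes no user-space locks, so it can't
  // block on a logging mutex that another thread holds during shutdown.
  (void)::write(STDERR_FILENO, msg, sizeof(msg) - 1);
  std::abort();
}
} // namespace sketch
```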

Related threads (all threads in P536343453):

  Thread 11 (LWP 3275661):
  #0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
  #1  0x0000000001cc995b in folly::detail::(anonymous namespace)::nativeFutexWaitImpl (addr=<optimized out>, expected=<optimized out>, absSystemTime=<optimized out>, absSteadyTime=<optimized out>, waitMask=<optimized out>) at fbcode/folly/detail/Futex.cpp:126
  #2  folly::detail::futexWaitImpl (futex=0x89, futex@entry=0x7f1c3ac2ef90, expected=994748889, absSystemTime=absSystemTime@entry=0x0, absSteadyTime=<optimized out>, absSteadyTime@entry=0x0, waitMask=waitMask@entry=1) at fbcode/folly/detail/Futex.cpp:254
  #3  0x0000000001d34bce in folly::detail::futexWait<std::atomic<unsigned int> > (futex=0x7f1c3ac2ef90, expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/detail/__futex__/headers/folly/detail/Futex-inl.h:96
  #4  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever::doWait (this=<optimized out>, futex=..., expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:718
  #5  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::futexWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1184
  #6  0x0000000001cd42b2 in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::yieldWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1151
  #7  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::waitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1109
  #8  0x0000000001e7e14c in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7f1c149f88e4: 118379409, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1664
  #9  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1356
  #10 folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lock_shared (this=0x7f1c3ac2ef90) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:495
  #11 std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >::shared_lock (this=<optimized out>, __m=...) at fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/shared_mutex:727
  #12 0x0000000002d765fd in folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::doLock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>, std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0>, 0> (mutex=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1493
  #13 folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::LockedPtr (this=0x7f1c149f8928, parent=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1272
  #14 folly::SynchronizedBase<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, (folly::detail::SynchronizedMutexLevel)2>::rlock (this=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:229
  #15 watchman::Publisher::hasSubscribers (this=<optimized out>) at fbcode/watchman/PubSub.cpp:117
  #16 0x0000000002eca798 in watchman::Log::log<char const (&) [39], char const*, char const (&) [3]> (this=<optimized out>, level=level@entry=watchman::ABORT, args=..., args=..., args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:42
  #17 0x0000000002ec9ba7 in watchman::log<char const (&) [39], char const*, char const (&) [3]> (level=watchman::ABORT, args=..., args=..., args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:121
  #18 (anonymous namespace)::terminationHandler () at fbcode/watchman/SignalHandler.cpp:159
  #19 0x00007f1c3b0c7b3a in __cxxabiv1::__terminate (handler=<optimized out>) at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:48
  #20 0x00007f1c3b0c7ba5 in std::terminate () at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:58
  #21 0x0000000001c38c8b in __clang_call_terminate ()
  #22 0x0000000003284c9e in folly::detail::terminate_with_<std::runtime_error, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&> (args=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/lang/__exception__/headers/folly/lang/Exception.h:93
  #23 0x0000000003281bae in folly::terminate_with<std::runtime_error, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&> (args=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/lang/__exception__/headers/folly/lang/Exception.h:123
  #24 folly::SingletonVault::fireShutdownTimer (this=<optimized out>) at fbcode/folly/Singleton.cpp:499
  #25 0x0000000003281ad9 in folly::(anonymous namespace)::fireShutdownSignalHelper (sigval=...) at fbcode/folly/Singleton.cpp:454
  #26 0x00007f1c3b42b939 in timer_sigev_thread (arg=<optimized out>) at ../sysdeps/unix/sysv/linux/timer_routines.c:55
  #27 0x00007f1c3b41fc0f in start_thread (arg=<optimized out>) at pthread_create.c:434
  #28 0x00007f1c3b4b21dc in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

  ...

  Thread 1 (LWP 3201992):
  #0  syscall () at ../sysdeps/unix/sysv/linux/x86_64/syscall.S:38
  #1  0x0000000001cc995b in folly::detail::(anonymous namespace)::nativeFutexWaitImpl (addr=<optimized out>, expected=<optimized out>, absSystemTime=<optimized out>, absSteadyTime=<optimized out>, waitMask=<optimized out>) at fbcode/folly/detail/Futex.cpp:126
  #2  folly::detail::futexWaitImpl (futex=0x89, futex@entry=0x7f1c3ac2ef90, expected=994748889, absSystemTime=absSystemTime@entry=0x0, absSteadyTime=<optimized out>, absSteadyTime@entry=0x0, waitMask=waitMask@entry=1) at fbcode/folly/detail/Futex.cpp:254
  #3  0x0000000001d34bce in folly::detail::futexWait<std::atomic<unsigned int> > (futex=0x7f1c3ac2ef90, expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/detail/__futex__/headers/folly/detail/Futex-inl.h:96
  #4  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever::doWait (this=<optimized out>, futex=..., expected=137, waitMask=1) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:718
  #5  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::futexWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1184
  #6  0x0000000001cd42b2 in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::yieldWaitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1151
  #7  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::waitForZeroBits<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, goal=128, waitMask=1, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1109
  #8  0x0000000001e7e14c in folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, state=@0x7ffd2d5be924: 118379408, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1664
  #9  folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lockSharedImpl<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::WaitForever> (this=0x7f1c3ac2ef90, token=0x0, ctx=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:1356
  #10 folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>::lock_shared (this=0x7f1c3ac2ef90) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__shared_mutex__/headers/folly/SharedMutex.h:495
  #11 std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >::shared_lock (this=<optimized out>, __m=...) at fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/shared_mutex:727
  #12 0x0000000002d765fd in folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::doLock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault>, std::shared_lock<folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0>, 0> (mutex=...) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1493
  #13 folly::LockedPtr<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> > const, folly::detail::SynchronizedLockPolicy<(folly::detail::SynchronizedMutexLevel)2, (folly::detail::SynchronizedMutexMethod)0> >::LockedPtr (this=0x7ffd2d5be968, parent=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:1272
  #14 folly::SynchronizedBase<folly::Synchronized<watchman::Publisher::state, folly::SharedMutexImpl<false, void, std::atomic, folly::SharedMutexPolicyDefault> >, (folly::detail::SynchronizedMutexLevel)2>::rlock (this=<optimized out>) at buck-out/v2/gen/fbcode/110b607930331a92/folly/__synchronized__/headers/folly/Synchronized.h:229
  #15 watchman::Publisher::hasSubscribers (this=<optimized out>) at fbcode/watchman/PubSub.cpp:117
  #16 0x0000000002ecac20 in watchman::Log::log<char const (&) [59]> (this=<optimized out>, level=level@entry=watchman::ABORT, args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:42
  #17 0x0000000002ec9b24 in watchman::log<char const (&) [59]> (level=watchman::ABORT, args=...) at buck-out/v2/gen/fbcode/110b607930331a92/watchman/__logging__/headers/watchman/Logging.h:121
  #18 (anonymous namespace)::terminationHandler () at fbcode/watchman/SignalHandler.cpp:165
  #19 0x00007f1c3b0c7b3a in __cxxabiv1::__terminate (handler=<optimized out>) at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:48
  #20 0x00007f1c3b0c7ba5 in std::terminate () at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:58
  #21 0x0000000002d8cde1 in std::thread::~thread (this=0x7f1c3ac2ef90) at fbcode/third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/std_thread.h:152
  #22 0x00007f1c3b3cc8f8 in __run_exit_handlers (status=1, listp=0x7f1c3b598658 <__exit_funcs>, run_list_atexit=<optimized out>, run_dtors=<optimized out>) at exit.c:113
  #23 0x00007f1c3b3cca0a in __GI_exit (status=<optimized out>) at exit.c:143
  #24 0x00007f1c3b3b165e in __libc_start_call_main (main=0x2d11220 <main(int, char**)>, argc=2, argv=0x7ffd2d5bec78) at ../sysdeps/nptl/libc_start_call_main.h:74
  #25 0x00007f1c3b3b1718 in __libc_start_main_impl (main=0x2d11220 <main(int, char**)>, argc=2, argv=0x7ffd2d5bec78, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffd2d5bec68) at ../csu/libc-start.c:409
  #26 0x0000000002d0e181 in _start () at ../sysdeps/x86_64/start.S:116
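
Both traces above stall in the same place: `watchman::Publisher::hasSubscribers` takes an rlock on a `folly::SharedMutex` from inside `terminationHandler`, so the process blocks on a lock while it is trying to die. A minimal, self-contained sketch of that hazard (hypothetical names and timing, not the watchman code):

```
#include <chrono>
#include <exception>
#include <shared_mutex>
#include <thread>

std::shared_mutex gStateLock; // stands in for the Publisher's Synchronized state

bool hasSubscribers() {
  // Blocks for as long as any writer holds the lock.
  std::shared_lock<std::shared_mutex> rlock(gStateLock);
  return true;
}

void terminationHandler() {
  // Calling a lock-taking helper from a terminate handler is the hazard:
  // if termination races with a writer, the handler hangs instead of
  // aborting with a useful message.
  hasSubscribers();
  std::abort();
}

int main() {
  std::set_terminate(terminationHandler);
  std::thread writer([] {
    std::unique_lock<std::shared_mutex> wlock(gStateLock);
    std::this_thread::sleep_for(std::chrono::hours(1)); // writer never lets go
  });
  std::this_thread::sleep_for(std::chrono::milliseconds(100)); // let the writer acquire first
  std::terminate(); // the handler now blocks on gStateLock, mirroring the traces
}
```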

Reviewed By: xavierd

Differential Revision: D40166374

fbshipit-source-id: 7017e20234e5e0a9532eb61a63ac49ac0020d443
facebook-github-bot pushed a commit that referenced this issue Mar 8, 2023
Summary:
AsyncUDPSocket test cases crash when run under llvm15. Debugging suggests the
issue is that the code tries to allocate a zero-size array. This change
prevents that allocation.

This is not a very clean fix, but I am not sure there is a better one.
Please let me know if you have any suggestions.
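
For context, a hedged reduction of the pattern the sanitizer reports below (illustrative only, not the actual `writev` code): in C++ a variable-length array is a GNU extension whose bound must be strictly positive, so a count of zero is flagged at runtime under llvm15, and returning early avoids ever evaluating the bound.

```
#include <sys/uio.h>
#include <cstddef>
#include <cstring>

// Illustrative reduction: `count` comes from the caller and may be zero.
long sendPackets(const iovec* iov, std::size_t count) {
  if (count == 0) {
    return 0; // the fix: bail out before the VLA bound can evaluate to zero
  }
  iovec vec[count]; // VLA (GNU extension); bound is now guaranteed >= 1
  std::memcpy(vec, iov, count * sizeof(iovec));
  // ... hand `vec` to writev()/sendmsg() or similar ...
  return static_cast<long>(count);
}
```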

Sample crash:
```
$ buck test //folly/io/async/test:async_udp_socket_test
...
stdout:
Note: Google Test filter = AsyncUDPSocketTest.TestDetachAttach
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from AsyncUDPSocketTest
[ RUN      ] AsyncUDPSocketTest.TestDetachAttach

stderr:
fbcode/folly/io/async/AsyncUDPSocket.cpp:699:10: runtime error: variable length array bound evaluates to non-positive value 0
    #0 0x7f4d8ed93704 in folly::AsyncUDPSocket::writev(folly::SocketAddress const&, iovec const*, unsigned long, int) fbcode/folly/io/async/AsyncUDPSocket.cpp:698
    #1 0x7f4d8ed9081f in folly::AsyncUDPSocket::writeGSO(folly::SocketAddress const&, std::unique_ptr<folly::IOBuf, std::default_delete<folly::IOBuf>> const&, int) fbcode/folly/io/async/AsyncUDPSocket.cpp:528
    #2 0x7f4d8ed900b2 in folly::AsyncUDPSocket::write(folly::SocketAddress const&, std::unique_ptr<folly::IOBuf, std::default_delete<folly::IOBuf>> const&) fbcode/folly/io/async/AsyncUDPSocket.cpp:660
    #3 0x350a05 in AsyncUDPSocketTest_TestDetachAttach_Test::TestBody() fbcode/folly/io/async/test/AsyncUDPSocketTest.cpp:914
    #4 0x7f4d90dd1ad5 in testing::Test::Run() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:2682:50
    #5 0x7f4d90dd1ad5 in testing::Test::Run() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:2672:6
    #6 0x7f4d90dd1c64 in testing::TestInfo::Run() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:2861:14
    #7 0x7f4d90dd1c64 in testing::TestInfo::Run() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:2833:6
    #8 0x7f4d90dd2321 in testing::TestSuite::Run() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:3015:31
    #9 0x7f4d90dd2321 in testing::TestSuite::Run() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:2993:6
    #10 0x7f4d90dd2b1e in testing::internal::UnitTestImpl::RunAllTests() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:5855:47
    #11 0x7f4d90dd1d87 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:2665:29
    #12 0x7f4d90dd1d87 in testing::UnitTest::Run() /home/engshare/third-party2/googletest/1.11.0/src/googletest/googletest/src/gtest.cc:5438:55
    #13 0x7f4d90d5c990 in RUN_ALL_TESTS() fbcode/third-party-buck/platform010/build/googletest/include/gtest/gtest.h:2490
    #14 0x7f4d90d5c618 in main fbcode/common/gtest/LightMain.cpp:20
    #15 0x7f4d8ea2c656 in __libc_start_call_main /home/engshare/third-party2/glibc/2.34/src/glibc-2.34/csu/../sysdeps/nptl/libc_start_call_main.h:58:16
    #16 0x7f4d8ea2c717 in __libc_start_main@GLIBC_2.2.5 /home/engshare/third-party2/glibc/2.34/src/glibc-2.34/csu/../csu/libc-start.c:409:3
    #17 0x33ea60 in _start /home/engshare/third-party2/glibc/2.34/src/glibc-2.34/csu/../sysdeps/x86_64/start.S:116
...
```

Reviewed By: russoue, dmm-fb

Differential Revision: D43858875

fbshipit-source-id: 93749bab17027b6dfc0dbc01b6c183e501a5494c
facebook-github-bot pushed a commit that referenced this issue Jul 28, 2023
Summary:
We have [at least] three redundant ADTs for representing Facts:
1. Rust hackc::FileFacts designed for interop with C++ FileFacts (#3)
2. Rust facts::Facts, carefully crafted to produce a specific JSON format
3. C++ HPHP::Facts::FileFacts constructed from folly::dynamic (from #2 JSON)

This diff massages #1 to get it ready to replace #2 and #3, along with the
unnecessary representation changes that go with them.
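
As a rough sketch of where this is headed (field names hypothetical; the real layout lives in hackc): with one ADT shared across the FFI boundary, the C++ side reads the facts directly instead of rebuilding them from #2's JSON through `folly::dynamic`.

```
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical shape of the single shared record. Once this one struct
// crosses the Rust/C++ boundary, the JSON round-trip (#2) and the
// folly::dynamic reconstruction (#3) both become unnecessary.
struct FileFacts {
  std::vector<std::string> types;
  std::vector<std::string> functions;
  std::vector<std::string> constants;
};

// Consumers read the struct as-is, with no representation change.
std::size_t countDecls(const FileFacts& facts) {
  return facts.types.size() + facts.functions.size() + facts.constants.size();
}
```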

Reviewed By: aorenste

Differential Revision: D46969023

fbshipit-source-id: 95f989774c995f094e216d6dc95f8a88aa0b2f06
facebook-github-bot pushed a commit that referenced this issue Aug 31, 2023
Summary:
Generic WebTransport interface

Principles:

1. It should be easy to write simple applications
2. The backpressure and error handling APIs should be understandable
3. The same generic API should work with traditional async code and coroutines

Futures are the best way to satisfy #3 because they can also be awaited in a
coroutine, without yet introducing a dependency on coroutines.
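
A minimal sketch of why that works (the `readDatagram` name is hypothetical, and this assumes folly built with coroutine support): the same `SemiFuture` can be consumed with a continuation or `co_await`ed in a `folly::coro::Task`.

```
#include <folly/futures/Future.h>
#include <folly/experimental/coro/Task.h>

// Hypothetical WebTransport-style call; stands in for the real interface.
folly::SemiFuture<int> readDatagram() {
  return folly::makeSemiFuture(42);
}

// Traditional async style: attach a continuation on an executor.
void callbackStyle(folly::Executor::KeepAlive<> executor) {
  readDatagram().via(executor).thenValue([](int datagram) {
    // handle the datagram
    (void)datagram;
  });
}

// Coroutine style: the very same SemiFuture is directly awaitable, so the
// interface works in coroutines without depending on them.
folly::coro::Task<int> coroStyle() {
  int datagram = co_await readDatagram();
  co_return datagram;
}
```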

Reviewed By: lnicco

Differential Revision: D47422753

fbshipit-source-id: 6aad62cd399a9e8112cab37f4979003d560caf8a
facebook-github-bot pushed a commit that referenced this issue Mar 19, 2024
Summary:
Ahead of eliminating the use of `internal_type`, this diff creates a helper that generates the corresponding disjunction, avoiding calls into `TCunion` and keeping `TCunion` out of the inference environment by rewriting it in terms of a disjunction. Specifically,

instead of

```
ty1 <: TCunion(ty2, cstr_rhs)
```

we rewrite to

```
ty1 <: (ty2 | #3) &&& #3 [cstr] cstr_rhs
```

Differential Revision: D54947179

fbshipit-source-id: 5fc4f1574c8428d24737a609f4416d8c15dd03c3
This issue was closed.