Random segfault in the tests #127
Hmm, maybe that's why my code in #119 fails. It's the same platform. https://travis-ci.com/bpfkorea/agora/jobs/221073787
BAD GitHub!
@Geod24 you provided a URL to a successful green CI run?
Found it.
v0.x.x is failing randomly too.
Reproduced on Linux with:
Caught a coredump after SIGABRT. There are about 37 threads, and one is in this state:

```
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007ffff743f535 in __GI_abort () at abort.c:79
#2  0x0000555555d4bd65 in core.thread.Fiber.allocStack(ulong, ulong) ()
#3  0x0000555555d4bdc3 in _D4core6thread5Fiber6__ctorMFNbDFZvmmZCQBlQBjQBf ()
#4  0x0000555555d34bc1 in std.concurrency.FiberScheduler.spawn(void() delegate) ()
#5  0x0000555555ab64dd in _D6geod249LocalRest__T9RemoteAPITC5agora4test4Base7TestAPIZQBl__T7spawnedTCQBpQBmQBk__T8TestNodeTCQCmQCjQCh18TestNetworkManagerZQBpZQCpFSQDz6common6ConfigQhZ__T6handleTSQGlQGh7CommandZQyMFNbQwZv (arg=...)
    at /home/denizzz/Dev/agora/submodules/localrest/source/geod24/LocalRest.d:449
#6  0x0000555555ab6349 in _D6geod249LocalRest__T9RemoteAPITC5agora4test4Base7TestAPIZQBl__T7spawnedTCQBpQBmQBk__T8TestNodeTCQCmQCjQCh18TestNetworkManagerZQBpZQCpFSQDz6common6ConfigQhZ9__lambda6MFZ9__lambda5MFNbSQHbQGx7CommandZv (
    cmd=...) at /home/denizzz/Dev/agora/submodules/localrest/source/geod24/LocalRest.d:485
#7  0x0000555555b5cf22 in _D3std11concurrency7Message__T3mapTDFNbS6geod249LocalRest7CommandZvZQBmMFQBmZv (this=<optimized out>, op=...) at /snap/ldc2/108/bin/../include/d/std/concurrency.d:163
#8  0x0000555555b5ccaf in _D3std11concurrency10MessageBox__T3getTS4core4time8DurationTDFNaNbNiNfCQCrQCq15OwnerTerminatedZvTDFNbNfS6geod249LocalRest11TimeCommandZvTDFNaNbNiNfSQBsQBo9FilterAPIZvTDFNbSQCqQCm8ResponseZvTDFNbSQDnQDj7CommandZvZQGwMFQGwMQGfMQEyMQDoMQCoMQBvZ13onStandardMsgMFKSQJyQJx7MessageZb (msg=...) at /snap/ldc2/108/bin/../include/d/std/concurrency.d:1998
#9  0x0000555555b5d523 in _D3std11concurrency10MessageBox__T3getTS4core4time8DurationTDFNaNbNiNfCQCrQCq15OwnerTerminatedZvTDFNbNfS6geod249LocalRest11TimeCommandZvTDFNaNbNiNfSQBsQBo9FilterAPIZvTDFNbSQCqQCm8ResponseZvTDFNbSQDnQDj7CommandZvZQGwMFQGwMQGfMQEyMQDoMQCoMQBvZ4scanMFKSQJoQJn__T4ListTSQKeQKd7MessageZQwZb (list=...) at /snap/ldc2/108/bin/../include/d/std/concurrency.d:2080
#10 0x0000555555b5c9fb in _D3std11concurrency10MessageBox__T3getTS4core4time8DurationTDFNaNbNiNfCQCrQCq15OwnerTerminatedZvTDFNbNfS6geod249LocalRest11TimeCommandZvTDFNaNbNiNfSQBsQBo9FilterAPIZvTDFNbSQCqQCm8ResponseZvTDFNbSQDnQDj7CommandZvZQGwMFQGwMQGfMQEyMQDoMQCoMQBvZb (this=0x7ffff7319f20, _param_0=..., _param_1=..., _param_2=..., _param_3=..., _param_4=..., _param_5=...) at /snap/ldc2/108/bin/../include/d/std/concurrency.d:2157
#11 0x0000555555ab5eff in _D3std11concurrency__T14receiveTimeoutTDFNaNbNiNfCQBwQBv15OwnerTerminatedZvTDFNbNfS6geod249LocalRest11TimeCommandZvTDFNaNbNiNfSQBsQBo9FilterAPIZvTDFNbSQCqQCm8ResponseZvTDFNbSQDnQDj7CommandZvZQGnFS4core4time8DurationQGuQFmQEbQDaQCgZb (duration=..., _param_1=..., _param_2=..., _param_3=..., _param_4=..., _param_5=...) at /snap/ldc2/108/bin/../include/d/std/concurrency.d:872
#12 0x0000555555ab5d3f in _D6geod249LocalRest__T9RemoteAPITC5agora4test4Base7TestAPIZQBl__T7spawnedTCQBpQBmQBk__T8TestNodeTCQCmQCjQCh18TestNetworkManagerZQBpZQCpFSQDz6common6ConfigQhZ9__lambda6MFZv ()
    at /home/denizzz/Dev/agora/submodules/localrest/source/geod24/LocalRest.d:467
#13 0x0000555555d34e2e in std.concurrency.FiberScheduler.create(void() delegate).wrap() ()
#14 0x0000555555d4bbf2 in fiber_entryPoint ()
#15 0x0000000000000000 in ?? ()
```

Looks like too many fibers are spawned sometimes and the stack runs out.
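For context on why fiber count matters here: `std.concurrency.FiberScheduler` creates one `core.thread.Fiber` (with its own stack) per `spawn` call, so every concurrent task costs a full fiber stack allocation, and that is exactly where `allocStack` aborts in the trace above. A minimal sketch of that usage pattern (the worker bodies are illustrative, not the project's actual code):

```d
import std.concurrency;

void main()
{
    // Route spawn() through fibers instead of OS threads.
    scheduler = new FiberScheduler;
    scheduler.start({
        // Each spawn() under a FiberScheduler creates a new Fiber via
        // FiberScheduler.create / Fiber.allocStack (frames #2-#4 above).
        auto tid = spawn({ receiveOnly!int; });
        send(tid, 42);
    });
}
```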
Wow, interesting. We could try allocating more pages with the scheduler. How did you manage to reliably reproduce it?
Rephrase: the stack runs out of free space.
Not very reliable: about a 1-in-20 chance of reproducing on each start. But I saved the binary and the core dump (3 GB).
From experience, fibers can scale into the thousands without problems.
However, we should check how much space they allocate by default. Btw, did the
4 pages on Linux.
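For reference, druntime's `core.thread.Fiber` constructor defaults the stack size to `PAGESIZE * 4` (the "4 pages" above) but lets callers pass a larger size explicitly, which is one way to buy headroom. A minimal sketch; the 64 KiB figure is illustrative, not a recommendation from this thread:

```d
import core.thread : Fiber;

void main()
{
    // Default stack: PAGESIZE * 4, i.e. 16 KiB on a 4 KiB-page Linux.
    auto small = new Fiber({ /* light work */ });

    // Request a bigger stack explicitly, e.g. 64 KiB, for deep call
    // chains or large stack-allocated locals.
    auto big = new Fiber({ /* heavier work */ }, 64 * 1024);

    small.call();
    big.call();
}
```

Note that each fiber's stack is mapped with a guard page, so raising the size multiplies address-space and page-table cost when thousands of fibers are alive at once.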
Yes.
But it's the Mac that's failing, right?
4 pages on
I got a different one just now; it seems to have happened in a collection cycle:
Another similar one:
Setting the .length property rather than calling new T[] causes a segfault in Druntime. Issue bosagora#127
Setting the .length property rather than calling new T[] causes a segfault in Druntime. Issue #127
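To make the workaround in those commits concrete, here is a minimal illustration of the two allocation paths being swapped (not the crashing code itself): growing via `.length` goes through druntime's in-place array resize hook, while `new T[]` allocates a fresh array in one step.

```d
import std.stdio;

void main()
{
    // Path 1: grow in place. Druntime (_d_arraysetlengthT) reallocates
    // through the GC and default-initializes the new elements. This is
    // the path reported to segfault in the issue above.
    int[] a;
    a.length = 4;

    // Path 2: allocate a fully initialized array up front, which the
    // commits switched to as a workaround.
    auto b = new int[](4);

    writeln(a, " ", b); // both are [0, 0, 0, 0]
}
```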
Similar to @AndrejMitrovic's, but it happens in finalization (!)
This SEGV is now triggering on every PR...
This has been fixed.
Observed on LDC 1.16, Mac OSX:
https://travis-ci.com/bpfkorea/agora/jobs/220748509