Some regression with Cro::WebSocket #2644
Also, it eats memory like crazy during tests (like more than 6G). Not sure if that is related or if it's new. EDIT: just measured, a bit closer to the end of the testing phase the memory rapidly spikes to more than 5G.
After looking at a run under heaptrack I added a debug print of offset and count here https://github.com/MoarVM/MoarVM/blob/master/src/6model/reprs/VMArray.c#L926 (in asplice). Here's part of the end of the output:
A manual bisect points towards 541a4f1.
The bisect did indeed point to the right commit. The code relied on an optimizer bug where in some cases, the success of a smart-match inside a loop would result in a return from the block containing the loop.
Furthermore, the modified optimizer not making the mistake it used to feels as much luck as judgement, so we should also:
Otherwise, we can end up with, for instance, a `loop` that does a smart-match ending up causing a return from the block containing the loop itself, thus terminating not only the loop, but also skipping any code in the block after the loop. Related to #2644.
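To make the described control-flow problem concrete, here is a hypothetical Raku sketch (not the actual Cro::WebSocket code; `demo` and its contents are invented for illustration). Correct semantics: a successful `when` exits only the enclosing topicalizing block (here, the loop body), so the code after the loop still runs. Under the buggy optimization described above, the successful smart-match could instead cause a return from the block containing the loop, skipping that trailing code.

```raku
# Hypothetical illustration of the affected pattern.
sub demo(@values) {
    my @log;
    for @values {
        when Int { @log.push("int: $_") }   # successful `when` should only
        default  { @log.push("other: $_") } # end this loop iteration
    }
    # With the miscompilation, a successful `when` above could return
    # from `demo` itself, so this line would never execute.
    @log.push('after-loop');
    return @log;
}
```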
A Rakudo optimizer bug made it do so in some cases (including the one provided here). Covers rakudo/rakudo#2644.
Rakudo optimizer patched, spectest added. I will do a release of
@jnthn yeah! Thank you! However, what about the memory issue? Any info on that?
@AlexDaniel Yes, that was easily explained. The code relied on a bug for a loop to terminate. With the bug gone, the loop kept going on and on. In each iteration, it added stuff into an array, which thus grew and grew. 5s is a long timeout, so some gigabytes in that time are easily achieved. I measured after the fix, and it's normal again.
OK, nice. There's nothing to do here then, I guess? Tests were added in Raku/roast@e95d29d; maybe it can be tested better, but the issue is resolved, I think.
I think we've done as much as is reasonable. Deliberately re-introducing a bug into the optimizer is likely to create new problems of its own, so I'd be very reluctant to do that.
This happens almost every time (if not always). As far as I can see this issue didn't exist in 2018.12.
Not bisected yet.
IRC discussion: https://colabti.org/irclogger/irclogger_log/perl6?date=2019-01-25#l57