next: Changes queued for 2017.11 release #1250
Confirmed that the performance regression is due to the JIT barrier. The lukego-optimize branch, with that change reverted, matched master's performance again. I will test using fewer barriers (on entry or exit to an app, but not both) and see whether that is better. Otherwise the JIT barrier might be too expensive for app/engine transitions.
I have reverted the calls to
Let's wait for the standard Hydra tests to complete now and then we should be
@eugeneia There is a performance regression on the iperf benchmark but I propose that we ship this now anyway.
I have been combing through recent CI results with @wingo on Slack and it seems like the problem is caused by voodoo. I have another branch with almost identical contents that does not show the issue. The only difference is whether the
I am reluctant to make a "nonsense" change to "solve" this problem. I would prefer to accept it for now and focus on finding the root cause of the variance we see in the iperf benchmark. I see the new RaptorJIT tooling as the way to do this, so I want to spend my time now on integrating that with Snabb. Hence my willingness to accept this symptom of the root problem (wider variance on the iperf benchmark) for the moment.
Hypothetically, if the problem is something obscure, like whether two Lua loop bytecode addresses hash into the same JIT hot counter, then there are probably very many different ways that it could be provoked (e.g. choice of C compiler version), and so I am not confident that nailing down one such issue in the test environment would translate into a real-world benefit. This would need to be solved more thoroughly in the JIT after seeing exactly what is really going on.
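To illustrate the hot-counter collision scenario: LuaJIT-style JITs keep a small fixed table of hot counters and index it by hashing the bytecode PC, so two unrelated loops can share one counter. The sketch below models that indexing; the constants (64 slots, `pc >> 2`) are illustrative assumptions, not a claim about the exact table size in our build.

```python
# Hypothetical sketch: how two loop bytecode addresses can alias in a
# JIT hot-counter table. Constants below are assumptions for
# illustration only.

HOTCOUNT_SIZE = 64  # assumed number of hot-counter slots (power of two)

def hotcount_slot(pc):
    """Map a bytecode address to a counter slot (illustrative hash)."""
    return (pc >> 2) & (HOTCOUNT_SIZE - 1)

# Two unrelated loops whose bytecode addresses differ by a multiple of
# HOTCOUNT_SIZE * 4 bytes land in the same slot. Their trip counts are
# then summed, so one loop can trigger (or delay) tracing of the other,
# and an innocuous change to code layout (e.g. a different C compiler)
# can create or remove such a collision.
pc_loop_a = 0x1000
pc_loop_b = 0x1000 + HOTCOUNT_SIZE * 4  # aliases with pc_loop_a

assert hotcount_slot(pc_loop_a) == hotcount_slot(pc_loop_b)
```

This is exactly why the effect would be so sensitive to incidental layout changes: any edit that shifts bytecode addresses by a non-multiple of the table stride breaks (or creates) the collision.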