
"different template cached" is slower than "different template, different context cached" #18

Closed
vollen opened this issue Apr 23, 2016 · 7 comments

Comments

@vollen

vollen commented Apr 23, 2016

When I run the benchmark like this:

local test = require "resty.template.microbenchmark"; 
test(100000)

The result is:

 Running 100000 iterations in each test
 Parsing Time: 0.612801
 Compilation Time: 2.201334 (template)
 Compilation Time: 0.008615 (template cached)
 Execution Time: 2.655195 (same template)
 Execution Time: 0.350653 (same template cached)
 Execution Time: 15.897813 (different template)
 Execution Time: 7.387499 (different template cached)
 Execution Time: 17.567453 (different template, different context)
 Execution Time: 0.967848 (different template, different context cached)
 Total Time: 47.649211

I found that "different template cached" is slower than "different template, different context cached", but I don't know why.

@bungle
Owner

bungle commented Apr 23, 2016

I need to look at it, but this is basically a toy benchmark. I think it would be better to run these tests in separate scripts: garbage collection can kick in, and memory allocation problems may too. There also seems to be a regression in LuaJIT with large iteration counts (possibly memory related). Try running the test with, say, Lua 5.3 as well, and it will actually run faster than LuaJIT (with smaller iteration counts LuaJIT is faster, with larger iteration counts PUC-Lua is faster). At some point I need to write a better benchmark; pull requests are welcome too.

I don't know why the same-context case is slower. Maybe the function is better warmed up by the second test, or GC kicks in, or something else. These should be run separately so that they don't affect each other.
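To illustrate the point about isolating the tests, here is a minimal sketch (not code from this thread; the `bench` helper and its behavior are my own assumptions) of running each phase with a forced GC reset so that garbage from one phase doesn't distort the timing of the next:

```lua
-- Hypothetical sketch: time each benchmark phase in isolation.
-- collectgarbage("collect") clears garbage left by earlier phases;
-- stopping the GC keeps collection pauses out of the timed loop.
local function bench(name, iterations, fn)
  collectgarbage("collect")   -- start from a clean heap
  collectgarbage("stop")      -- no GC pauses inside the timed loop
  local start = os.clock()
  for i = 1, iterations do
    fn(i)
  end
  local elapsed = os.clock() - start
  collectgarbage("restart")   -- re-enable the GC afterwards
  print(("%-40s %.6f"):format(name, elapsed))
  return elapsed
end

-- usage: bench("same template cached", 1000, function() --[[ render ]] end)
```

This isn't equivalent to separate scripts (the Lua state and JIT traces are still shared), but it removes one of the cross-test effects mentioned above.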

@bungle
Owner

bungle commented Apr 23, 2016

The cached functions are, after all, doing string concatenation, and the best thing that could happen is for Lua or LuaJIT to introduce a proper string buffer.
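For context, the usual workaround in plain Lua is to collect output chunks in a table and join them once at the end, which avoids creating an intermediate string on every `..` concatenation. A minimal illustration (the `render_chunks` helper is hypothetical, not from resty.template):

```lua
-- Hypothetical illustration: a table used as a string buffer.
-- Appending to a table creates no intermediate strings; table.concat
-- then builds the final string in a single pass.
local function render_chunks(chunks)
  local buf = {}
  for i = 1, #chunks do
    buf[#buf + 1] = chunks[i]
  end
  return table.concat(buf)
end

print(render_chunks({ "Hello, ", "world", "!" }))  -- Hello, world!
```

(LuaJIT 2.1 later gained a `string.buffer` extension library for exactly this purpose.)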

@bungle
Owner

bungle commented Apr 26, 2016

@lgfng Actually, there was a bug in the microbenchmark:
05447b6#diff-884dafb4ce5a8f2cc3d4e47a302b0cf9L109

there was a hard-coded constant 1000 in the last test.

@bungle
Owner

bungle commented Apr 28, 2016

@lgfng, also check this:
LuaJIT/LuaJIT#168

It's really because LuaJIT handles hash collisions poorly.

@funny-falcon

Pull request LuaJIT/LuaJIT#169

@vollen
Author

vollen commented May 9, 2016

@bungle When I ran this test, this bug was not there yet; it was introduced by this commit.

As a newbie, I'm very proud that this issue could lead to a discussion about LuaJIT. Thank you for your work.

Sorry for my poor English.

Regards.

@bungle
Owner

bungle commented May 9, 2016

@lgfng, yes, it seems I made a bad commit somewhere along the way. It is fixed now. But the main issue remains and is discussed in many places; the most recent patch proposal can be found here:
LuaJIT/LuaJIT#174

Right now even running the default test with 1,000 iterations is affected by this. I'm not sure how likely it is for this issue to occur in normal code outside this benchmark, but I guess it is not impossible. You can always patch OpenResty's LuaJIT with this patch if you are worried about it.

@bungle bungle closed this as completed Oct 28, 2016