Update of TechEmpower web framework benchmark for latest Colossus version #660
Comments
Changes look good to me. It will be very interesting to see how the latest version performs, as well as how it runs on Scala 2.12.
We were going to update the code to use the same JSON library as everyone else, but if this is faster I suppose we can go with it. Thanks.
I have used
Here are results for top Scala & Java web frameworks:
Environment:
IMHO it is an ugly way to test HTTP servers, but it seems the maintainers take the same approach for these benchmarks: http://tfb-logs.techempower.com/round-15/preview-3/colossus/ I would prefer to spend a couple of nights to get a red pill, set up a realistic environment, and test how the systems behave under load by measuring response times properly.
I would expect the plaintext test to be faster than the json test, since the service has to do less work; it doesn't have to create JSON. I'm not a fan of the JSON test since, at least in the case of Colossus, we are just testing the selected JSON library. Changing the JSON library will change the results. Maybe we should only be in the plaintext test...
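To illustrate the point that the json test mostly ranks the serializer rather than the server, here is a minimal Python sketch (standing in for the Scala codecs; the function names and the hand-rolled encoder are hypothetical, not anything from the benchmark code). A generic library encoder and a shape-specialized one produce byte-identical output for the TechEmpower json payload, so any throughput difference comes purely from the library:

```python
import json
import timeit

MESSAGE = {"message": "Hello, World!"}  # the TechEmpower json test payload

def encode_stdlib(msg):
    # generic library path: walks an arbitrary dict, escapes strings, etc.
    return json.dumps(msg, separators=(",", ":")).encode()

def encode_handrolled(msg):
    # specialized path: knows the exact shape, just splices bytes together
    return b'{"message":"' + msg["message"].encode() + b'"}'

# Both serializers yield byte-identical output...
assert encode_stdlib(MESSAGE) == encode_handrolled(MESSAGE)

# ...but their per-call cost can differ severalfold, and that
# difference is what the json test ends up measuring.
for fn in (encode_stdlib, encode_handrolled):
    per_op = timeit.timeit(lambda: fn(MESSAGE), number=100_000) / 100_000
    print(f"{fn.__name__}: {per_op * 1e9:.0f} ns/op")
```

Swap the encoder and the "framework's" json ranking moves, even though the HTTP stack is untouched.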
IMHO it is because the error between runs is greater than the serialization time of the response. It can be started from the root of the jsoniter-scala project directory with the following command:
Here are its results on my notebook:
Both of them are too efficient to show any impact on request handling.
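A rough way to see why, using only the Python standard library as a stand-in for the Scala serializers: encoding the one-field message takes well under a microsecond per call, and the spread between repeated measurement runs is of the same order, so the serializer's cost drowns in the run-to-run noise of an end-to-end HTTP benchmark:

```python
import json
import statistics
import timeit

payload = {"message": "Hello, World!"}

# Time several independent runs, like repeating a load-test run.
RUNS, ITERS = 5, 200_000
per_op_ns = [
    timeit.timeit(lambda: json.dumps(payload), number=ITERS) / ITERS * 1e9
    for _ in range(RUNS)
]

median = statistics.median(per_op_ns)
spread = max(per_op_ns) - min(per_op_ns)
# The spread between runs is typically a sizable fraction of the
# per-call cost itself, so sub-microsecond differences between two
# fast serializers cannot be resolved at the HTTP level.
print(f"per call: ~{median:.0f} ns, run-to-run spread: {spread:.0f} ns")
```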
Ah. In the TechEmpower benchmarks, they do not use pipelining in the json test, but they do in the plaintext test with a pipelining factor of 16. I don't know why they do it this way, but when we run benchmarks ourselves we need to make sure we replicate their parameters to get comparable results. Personally I agree the json benchmark is not particularly useful for us, but I would prefer we be in as many tests as possible, and if that's the case then I'd also prefer we just use whatever is fastest for us. A hello-world benchmark is never going to be truly representative of actual performance, and I think the whole thing is just a publicity stunt, so we may as well play the game.
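For readers unfamiliar with the term: HTTP/1.1 pipelining means the load generator writes several requests on one connection before reading any responses, so the server's parser is always fed. A self-contained Python sketch of a depth-16 pipeline (the local `http.server` endpoint is only a stand-in for a real framework under test):

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

BODY = b"Hello, World!"

class PlaintextHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive is required for pipelining

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), PlaintextHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

DEPTH = 16  # the TechEmpower plaintext pipelining factor
request = f"GET /plaintext HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()

with socket.create_connection((host, port), timeout=5) as sock:
    sock.sendall(request * DEPTH)    # write all 16 requests up front...
    data = b""
    while data.count(BODY) < DEPTH:  # ...then drain all 16 responses
        chunk = sock.recv(65536)
        if not chunk:
            break
        data += chunk

server.shutdown()
print(data.count(b"HTTP/1.1 200"))
```

Real load generators do this across many concurrent connections; the point here is only the write-16-then-read-16 pattern, which is why the plaintext numbers are not comparable to unpipelined runs.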
Using your script with pipeline depth = 16, I got the following results on the same environment:
Please review my changes, which have already been merged into the benchmarks: TechEmpower/FrameworkBenchmarks@72003b9
...and also the pending PR with JVM options tuned for better throughput: TechEmpower/FrameworkBenchmarks#3184
Now waiting for the final 15th round (or the next preview-4). Here are the results of preview-3, which was run in Nov 2017: http://tfb-logs.techempower.com/round-15/preview-3/colossus