
Discover downgrade performance trends over the releases #13

Open
UlisesGascon opened this issue Nov 14, 2022 · 9 comments

@UlisesGascon
Member

I was wondering if in the past we have run "full" benchmarking tests against all the TLS versions and if those results were stored. Maybe we can discover downgrade performance trends over time if we compare different releases.

This is aligned (a bit) with the baseline idea proposed by @RafaelGSS, but it extends the comparison to more releases (not just the last one).
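
For illustration, a minimal sketch of what a cross-release run could look like (the release binary paths and the benchmark file below are placeholders; it assumes locally installed builds of each release and a microbenchmark from a nodejs/node checkout, e.g. under `benchmark/tls/`):

```js
// Sketch: run the same TLS microbenchmark under several Node.js releases
// and print each run's output side by side. All paths are placeholders.
'use strict';
const { execFileSync } = require('child_process');

// Hypothetical locations of locally installed release binaries.
const releases = [
  '/opt/node-v14.21.1/bin/node',
  '/opt/node-v16.18.1/bin/node',
  '/opt/node-v18.12.1/bin/node',
];

// Any microbenchmark from a nodejs/node checkout works here;
// run this script from the checkout root so the relative path resolves.
const benchmark = 'benchmark/tls/tls-connect.js';

for (const bin of releases) {
  const output = execFileSync(bin, [benchmark], { encoding: 'utf8' });
  console.log(`--- ${bin} ---\n${output.trim()}\n`);
}
```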

@mcollina
Member

We used to have this, but we folded the team due to lack of contributions.

@sheplu
Member

sheplu commented Nov 15, 2022

It would be very interesting indeed to have a set of benchmarks run every week (?) and to keep the values so we have some charts or visuals. Like you said, it would help us visualize performance issues but also demonstrate optimizations.
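
As a rough sketch of what each scheduled run could do (the benchmark script `bench/http-simple.js` and the output file are hypothetical, assuming the benchmark prints a single ops/sec number):

```js
// Sketch: one scheduled run appends a timestamped record to a history file,
// which a chart can consume later. Script and file names are assumptions,
// not part of any existing setup.
'use strict';
const { execFileSync } = require('child_process');
const { appendFileSync } = require('fs');

// Hypothetical benchmark script that prints a single ops/sec number.
const result = execFileSync(process.execPath, ['bench/http-simple.js'], {
  encoding: 'utf8',
});

const record = {
  date: new Date().toISOString(),
  node: process.version,
  opsPerSec: Number.parseFloat(result),
};

// One JSON object per line (NDJSON) keeps the history trivial to append
// and to load into a chart later, no database needed.
appendFileSync('bench-history.ndjson', JSON.stringify(record) + '\n');
```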

@joyeecheung
Member

We used to have it at https://github.com/nodejs/benchmarking/tree/master/benchmarks; I have no idea where the data went, though (I think it covered up to v12.x?). cc @mhdawson

@mhdawson
Member

We used to publish benchmarking data to https://nodejs.org/benchmarking, but as mentioned, that was discontinued when the benchmarking team petered out.

There have also been a number of discussions around how to use the microbenchmarks to track performance between versions. The challenge was always that a full run takes a very long time (days), and then you have a large set of numbers to compare/validate. The goal was to try to figure out a subset that made sense/was valuable, but we never managed to figure that out. Instead, the benchmarks that were run, like Acme Air, were intended to be more real-world measurements and were faster to run.

I do think tracking performance between versions would be valuable if we can line up people to contribute/review the results.

@wa-Nadoo

There is a project with a similar goal, https://github.com/mscdex/nodebench. Maybe it can be used as a starting point for the implementation.

@UlisesGascon
Member Author

UlisesGascon commented Nov 26, 2022

Thanks for the feedback and the historical context on the benchmarking and microbenchmarks. I agree with @mhdawson that focusing on concrete microbenchmarks makes total sense, especially if we want to avoid long feedback loops.

Thanks @wa-Nadoo for the suggestion, the tool seems fantastic. I love the way the UI works, but there are some limitations, like for fs, due to the machine used to run the tests. I'm attaching a screenshot as a reference 🙂

[Screenshot: nodebench results page at mscdex.github.io/nodebench, captured 2022-11-26]

@anonrig
Member

anonrig commented Feb 9, 2023

I think the work done by @RafaelGSS solves this issue.

@tniessen
Member

tniessen commented Mar 6, 2023

@anonrig I can't seem to find any context on your comment. What work are you referring to exactly?

@anonrig
Member

anonrig commented Mar 6, 2023

@tniessen I don't quite remember, but @RafaelGSS did a fantastic job on https://github.com/RafaelGSS/nodejs-bench-operations and was working on generating reports & graphs at that time.
