
Investigate using Hyperfine in benchmark scripts #98

Closed
jqnatividad opened this issue Nov 8, 2021 · 9 comments
Labels
enhancement New feature or request. Once marked with this label, it's in the backlog.

Comments

@jqnatividad
Owner

The current benchmark script has been improved, but it still lacks the rigor of a proper benchmark.

Investigate using hyperfine, and perhaps, we can even automate the benchmarks as part of the release process.
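A minimal sketch of what a hyperfine-based benchmark run could look like (the data file name is an illustrative assumption; `qsv count`, `qsv stats`, and `qsv frequency` are existing qsv commands):

```shell
# Hypothetical sketch: time a few qsv commands with hyperfine.
# --warmup discards initial runs so filesystem caches are hot;
# --export-markdown writes a results table suitable for the docs.
DATA=NYC_311.csv
hyperfine --warmup 3 \
  --export-markdown benchmark_results.md \
  "qsv count $DATA" \
  "qsv stats $DATA" \
  "qsv frequency $DATA"
```

hyperfine runs each command several times and reports mean, standard deviation, and min/max, which would address the rigor concern above.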

@jqnatividad jqnatividad added the enhancement New feature or request. Once marked with this label, it's in the backlog. label Nov 8, 2021
@github-actions

github-actions bot commented Jan 8, 2022

Stale issue message

@udsamani

@jqnatividad Let me know if this is being worked on, or I can pick this up!

@jqnatividad
Owner Author

Hi @udsamani, @minhajuddin2510 just started working on it, but there's more on the backlog he can work on.

So yeah, have a go!

@jqnatividad
Owner Author

jqnatividad commented Aug 30, 2022

Also, NYC's 311 data is a very interesting dataset, and I'd like to keep it as the reference benchmark data.

But it'd be great if we parameterize the benchmark data so folks have the option to change it, and perhaps, maintain their own internal benchmarks using data that's more representative of their data workloads/pipelines.
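One way to sketch that parameterization, assuming an environment variable override (the variable name `QSV_BENCH_DATA` and the default file name are hypothetical, not part of qsv):

```shell
#!/bin/sh
# Hypothetical sketch: let users point the benchmark at their own data,
# defaulting to the NYC 311 reference dataset when no override is set.
DATA="${QSV_BENCH_DATA:-NYC_311.csv}"
hyperfine --warmup 2 \
  "qsv count $DATA" \
  "qsv stats $DATA"
```

Running `QSV_BENCH_DATA=my_pipeline_sample.csv ./benchmark.sh` would then benchmark against data representative of a user's own workloads.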

https://github.com/jqnatividad/qsv/blob/master/docs/PERFORMANCE.md#benchmarking-for-performance

@jqnatividad
Owner Author

Hi @udsamani, just checking to see if you have any questions about the old benchmark script...

@jqnatividad jqnatividad added the wontfix This will not be worked on label Oct 13, 2022
@jqnatividad
Owner Author

With #542, we'll stick with the existing benchmark script, making sure it runs the latest version of qsv, and run the benchmark manually on each release.

We'll revisit using hyperfine in the future when we attempt to automate benchmark generation as part of the release process. Adding the wontfix tag for now...

@github-actions

Stale issue message

@jqnatividad
Owner Author

Reopening and removing wontfix as we add this back to the backlog.

@jqnatividad jqnatividad removed the wontfix This will not be worked on label Jun 27, 2023
jqnatividad added a commit that referenced this issue Aug 23, 2023
Awesome @minhajuddin2510 !

#98 has been a longstanding open issue and we can finally close it!

The next step is to integrate the benchmark into GitHub Actions CI so it's automatically updated with each release.

Thanks!
@jqnatividad
Owner Author

Finally implemented by @minhajuddin2510 in #1237
