Following PR #13511 it appears that there is no reference benchmark for SVMs in scikit-learn or in any side project (sklearn-contrib).
This seems quite risky in the long run; maybe we should create one, especially to quantify the impact of changes to the C code such as those in PR #13511.
I have worked quite a bit on creating reference benchmarks over the past years, which led to two tools in the pytest ecosystem: pytest-cases and pytest-harvest, with the beginning of a tutorial here (outdated, I'm afraid). I can therefore certainly try to help with a benchmark framework structure if you find such an idea interesting.
However, I do not know a good set of reference datasets to start with (apart from creating challenging ones "by hand").
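To make the proposal concrete, here is a minimal sketch of what a reference-benchmark harness could record per case: a dataset, a fit workload, and the best wall-clock time over several repeats. Everything here is hypothetical (the helper names, the toy dataset generator, and the nearest-centroid workload standing in for `SVC(...).fit`); a real benchmark would use scikit-learn estimators and datasets, and could be parametrized with pytest-cases and collected with pytest-harvest.

```python
import random
import time


def make_toy_dataset(n_samples=200, seed=0):
    # Hypothetical stand-in for a real dataset generator; an actual
    # benchmark would use sklearn.datasets.make_classification or a
    # curated set of reference datasets.
    rng = random.Random(seed)
    X, y = [], []
    for _ in range(n_samples):
        label = rng.choice([0, 1])
        X.append((rng.gauss(label, 1.0), rng.gauss(-label, 1.0)))
        y.append(label)
    return X, y


def bench(fit_fn, X, y, n_repeats=3):
    # Run the fit function several times and keep the best wall-clock
    # time, the usual way to reduce timing noise in micro-benchmarks.
    best = float("inf")
    for _ in range(n_repeats):
        start = time.perf_counter()
        fit_fn(X, y)
        best = min(best, time.perf_counter() - start)
    return {"case": fit_fn.__name__, "n_samples": len(X), "best_time_s": best}


def nearest_centroid_fit(X, y):
    # Dummy workload standing in for a real SVC(...).fit(X, y) call.
    centroids = {}
    for label in set(y):
        pts = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = tuple(sum(col) / len(pts) for col in zip(*pts))
    return centroids


if __name__ == "__main__":
    X, y = make_toy_dataset()
    print(bench(nearest_centroid_fit, X, y))
```

In a pytest-based framework, each dataset/estimator combination would become a case function and the `bench` result dictionaries would be stored per test via pytest-harvest fixtures, so that the impact of a C-level change can be compared across a whole results table rather than a single timing.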