awesome-python-benchmarks

Statistical benchmarking of Python packages.

Machine-learning benchmarks

Python time-series benchmarking

  • The MCompetitions repository lists winning methods for the M4 and M5 contests. For example, the LightGBM approach document can be found there, alongside those of the other winners.

  • Time-Series Elo ratings rank methods for autonomous univariate prediction of relatively short sequences (400 lags), scoring predictions from 1 to 34 steps ahead (see the sketch after this list).

  • Papers with Code hosts a couple of time-series benchmarks, such as ETTh1.
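As a minimal sketch of the evaluation setup the Elo ratings describe (400 lags of history, horizons 1 through 34), the following example scores forecasters with rolling-origin RMSE per horizon. The two benchmark forecasters and the AR(1) data generator are illustrative assumptions, not the Elo site's own code or methods.

```python
import numpy as np

rng = np.random.default_rng(0)

LAGS = 400   # history length supplied to a method, as in the Elo setup
MAX_K = 34   # horizons evaluated: 1 to 34 steps ahead

def last_value(history, k):
    """Naive benchmark: repeat the most recent observation."""
    return np.full(k, history[-1])

def drift(history, k):
    """Random-walk-with-drift benchmark: extrapolate the average step."""
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + slope * np.arange(1, k + 1)

def rmse_by_horizon(method, series, n_origins=50):
    """Roll the forecast origin forward; accumulate squared error per horizon."""
    sq_err = np.zeros(MAX_K)
    for i in range(n_origins):
        history = series[i:i + LAGS]
        future = series[i + LAGS:i + LAGS + MAX_K]
        sq_err += (method(history, MAX_K) - future) ** 2
    return np.sqrt(sq_err / n_origins)

# Synthetic AR(1) data, long enough for all rolling origins
n = LAGS + MAX_K + 50
series = np.empty(n)
series[0] = 0.0
for t in range(1, n):
    series[t] = 0.9 * series[t - 1] + rng.normal()

for method in (last_value, drift):
    scores = rmse_by_horizon(method, series)
    print(f"{method.__name__:>10}  k=1: {scores[0]:.2f}  k=34: {scores[-1]:.2f}")
```

Averaging error at each horizon separately, as here, is what lets a leaderboard distinguish methods that excel at short-range prediction from those that hold up 34 steps out.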

Python black-box derivative-free benchmarking

R time-series benchmarking

  • ForecastBenchmark automatically evaluates and ranks forecasting methods based on their performance across a diverse set of evaluation scenarios. The benchmark comprises four use cases, each covering 100 heterogeneous time series drawn from different domains (a conceptual sketch follows below).
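ForecastBenchmark itself is an R package, so, staying in this list's Python register, here is only a loose conceptual sketch of rank-based evaluation over many heterogeneous series: score every method on each series with a scale-free error and average the per-series ranks. The method names, the sMAPE choice, the 10-step horizon, and the synthetic series are all illustrative assumptions, not the package's actual metrics or data.

```python
import numpy as np

rng = np.random.default_rng(1)

def last_value(history, k):
    return np.full(k, history[-1])

def mean_value(history, k):
    return np.full(k, history.mean())

METHODS = {"last_value": last_value, "mean_value": mean_value}

def smape(forecast, actual):
    """Symmetric MAPE: scale-free, so errors are comparable across series."""
    denom = np.abs(forecast) + np.abs(actual) + 1e-9
    return 100.0 * np.mean(2.0 * np.abs(forecast - actual) / denom)

def rank_methods(series_list, horizon=10):
    """Score each method on every series, then average its per-series rank."""
    ranks = {name: [] for name in METHODS}
    for series in series_list:
        history, actual = series[:-horizon], series[-horizon:]
        scores = {name: smape(m(history, horizon), actual)
                  for name, m in METHODS.items()}
        for r, name in enumerate(sorted(scores, key=scores.get), start=1):
            ranks[name].append(r)
    return {name: float(np.mean(r)) for name, r in ranks.items()}

# 100 heterogeneous synthetic series standing in for one "use case"
series_list = [np.cumsum(rng.normal(size=int(rng.integers(120, 300))))
               for _ in range(100)]
print(rank_methods(series_list))   # lower average rank is better
```

Averaging ranks rather than raw errors keeps any single wildly-scaled series from dominating the comparison, which matters when the 100 series per use case come from different domains.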
