Bayesian Benchmarks

This is a set of tools for evaluating Bayesian models, together with benchmark implementations and results.

Motivations:

  • There is a lack of standardized tasks that meaningfully assess the quality of uncertainty quantification for Bayesian black-box models.
  • Variations between tasks in the literature make a direct comparison between methods difficult.
  • Implementing competing methods takes considerable effort, and there is little incentive to do a good job.
  • Published papers may not always provide complete details of implementations due to space considerations.

Aims:

  • Curate a set of benchmarks that meaningfully compare the efficacy of Bayesian models in real-world tasks.
  • Maintain a fair assessment of benchmark methods, with full implementations and results.

Tasks:

  • Classification and regression (see the metrics sketch after this list)
  • Density estimation (real-world and synthetic) (TODO)
  • Active learning
  • Adversarial robustness (TODO)
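
For the classification and regression tasks, the quality of uncertainty quantification is typically summarised by the average test log-likelihood of the predictive distribution, alongside point-error metrics such as RMSE. The sketch below illustrates these metrics for a Gaussian predictive distribution; the function name and signature are illustrative assumptions, not this package's actual API.

```python
import numpy as np
from scipy.stats import norm

def gaussian_regression_metrics(y_test, pred_mean, pred_var):
    """Uncertainty-aware regression metrics for a Gaussian predictive
    distribution: average test log-likelihood and RMSE.

    Illustrative helper only; not part of bayesian_benchmarks' API.
    """
    y_test, pred_mean, pred_var = map(np.asarray, (y_test, pred_mean, pred_var))
    # Per-point log density of the test targets under the predictive Gaussian.
    log_lik = norm.logpdf(y_test, loc=pred_mean, scale=np.sqrt(pred_var))
    # Root-mean-square error of the predictive mean.
    rmse = np.sqrt(np.mean((y_test - pred_mean) ** 2))
    return {"test_loglik": float(np.mean(log_lik)), "rmse": float(rmse)}
```

A well-calibrated model should achieve a high test log-likelihood without sacrificing RMSE; reporting both makes over- and under-confident predictive variances visible.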

Current implementations:

Coming soon: