PAC-Bayesian bounds computation related to "Risk Bounds for the Majority Vote [...]" (JMLR 2015)

Majority Vote Bounds

PAC-Bayesian bounds computation related to a JMLR paper (see [1]).


This Python code depends on the numpy and scipy libraries.


Each bound computation routine is contained in a single file, whose name refers to the bound number in the paper.


You can compute a bound value by either:

  • Importing the file in a Python project and calling the contained function, or
  • Executing the file from the command-line.

For instance:

$ python
Usage: empirical_gibbs_risk empirical_disagreement [m] [KLQP] [delta]
  PAC Bound TWO of Germain, Lacasse, Laviolette, Marchand and Roy (JMLR 2015)

    Compute a PAC-Bayesian upper bound on the Bayes risk by
    using the C-Bound. To do so, we bound *simultaneously*
    the disagreement and the joint error.

    empirical_gibbs_risk : Gibbs risk on the training set
    empirical_disagreement : Expected disagreement on the training set
    m : number of training examples
    KLQP : Kullback-Leibler divergence between prior and posterior
    delta : confidence parameter (default=0.05)

$ python 0.2 0.3 1000 2.0
empirical_gibbs_risk = 0.2
empirical_disagreement = 0.3
m = 1000
KLQP = 2.0
delta = 0.05
bayes risk bound = 0.360433
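PAC Bound 2 works through the paper's C-bound, which for a Gibbs risk r < 1/2 and expected disagreement d bounds the majority-vote (Bayes) risk by 1 - (1 - 2r)^2 / (1 - 2d). A minimal sketch of that formula (not the repository's code):

```python
def c_bound(gibbs_risk, disagreement):
    """C-bound on the majority-vote risk: 1 - (1 - 2r)^2 / (1 - 2d),
    valid when both the Gibbs risk r and the disagreement d are below 1/2."""
    assert gibbs_risk < 0.5 and disagreement < 0.5
    return 1.0 - (1.0 - 2.0 * gibbs_risk) ** 2 / (1.0 - 2.0 * disagreement)
```

Plugging in the empirical values above, `c_bound(0.2, 0.3)` gives about 0.1; the printed bound of 0.360433 is larger because the PAC-Bayesian computation must additionally account for the estimation error governed by m, KLQP and delta.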


[1] Pascal Germain, Alexandre Lacasse, François Laviolette, Mario Marchand and Jean-Francis Roy. "Risk Bounds for the Majority Vote: From a PAC-Bayesian Analysis to a Learning Algorithm". Journal of Machine Learning Research (JMLR), volume 16 (Apr) p. 787-860, 2015.