---
layout: proof
mathjax: true
author: "Joram Soch"
affiliation: "BCCN Berlin"
e_mail: "joram.soch@bccn-berlin.de"
date: 2022-10-20 03:47:00 -0700

title: "Derivation of the family evidence"
chapter: "Model Selection"
section: "Bayesian model selection"
topic: "Family evidence"
theorem: "Derivation"

sources:

proof_id: "P368"
shortcut: "fe-der"
username: "JoramSoch"
---

**Theorem:** Let $f$ be a family of $M$ generative models $m_1, \ldots, m_M$ with model evidences $p(y \vert m_1), \ldots, p(y \vert m_M)$. Then, the family evidence can be expressed in terms of the model evidences as

$$ \label{eq:FE-marg} \mathrm{FE}(f) = \sum_{i=1}^M p(y|m_i) \, p(m_i|f) $$

where $p(m_i \vert f)$ are the within-family prior model probabilities.

**Proof:** This is a consequence of the law of marginal probability for discrete variables

$$ \label{eq:prob-marg} p(y|f) = \sum_{i=1}^M p(y,m_i|f) $$

and the law of conditional probability according to which

$$ \label{eq:prob-cond} p(y,m_i|f) = p(y|m_i,f) \, p(m_i|f) \; . $$

Since models are nested within model families, such that $m_i \wedge f \leftrightarrow m_i$, we have the following equality of probabilities:

$$ \label{eq:prob-equal} p(y|m_i,f) = p(y|m_i \wedge f) = p(y|m_i) \; . $$

Plugging \eqref{eq:prob-cond} into \eqref{eq:prob-marg} and applying \eqref{eq:prob-equal}, we obtain:

$$ \label{eq:ME-marg-qed} \mathrm{FE}(f) = p(y|f) = \sum_{i=1}^M p(y|m_i) \, p(m_i|f) \; . $$
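As a numeric illustration of the final equation, the family evidence can be computed as the prior-weighted sum of the model evidences. The sketch below uses hypothetical values for $p(y|m_i)$ and $p(m_i|f)$, chosen only for demonstration:

```python
# Numeric sketch of the marginalization: the family evidence FE(f) is the
# sum of model evidences p(y|m_i) weighted by the within-family prior
# model probabilities p(m_i|f). All values below are hypothetical.

model_evidences = [0.2, 0.5, 0.3]    # p(y|m_1), p(y|m_2), p(y|m_3)
prior_probs     = [0.5, 0.25, 0.25]  # p(m_i|f); must sum to 1

family_evidence = sum(pe * pm for pe, pm in zip(model_evidences, prior_probs))
print(family_evidence)  # 0.2*0.5 + 0.5*0.25 + 0.3*0.25 = 0.3
```

Note that a uniform within-family prior, $p(m_i|f) = 1/M$, would reduce the family evidence to the arithmetic mean of the model evidences.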