These modules compute (or approximate/bound) the marginal log likelihood (MLL) of a GP model applied to data. That is, given a GP f ∼ 𝒢𝒫(μ, K) and data X, y, these modules compute or approximate
ℒ = p_f(y ∣ X) = ∫ p(y ∣ f(X)) p(f(X) ∣ X) df
This quantity is computed exactly when GP inference is exact (e.g. regression with a Gaussian likelihood). It is approximated or bounded for GP models that use approximate inference.
These modules are typically used as "loss" functions for training GP models (note that their output must be negated for minimization-based optimization).
gpytorch.mlls
These are MLLs for use with ~gpytorch.models.ExactGP modules. They compute the MLL exactly.
ExactMarginalLogLikelihood
LeaveOneOutPseudoLikelihood
These are MLLs for use with ~gpytorch.models.ApproximateGP modules. They are designed for cases where exact inference is intractable (either because the likelihood is non-Gaussian, or because there is too much data for an ExactGP model).
VariationalELBO
PredictiveLogLikelihood
GammaRobustVariationalELBO
DeepApproximateMLL
AddedLossTerm
InducingPointKernelAddedLossTerm
KLGaussianAddedLossTerm