11 changes: 10 additions & 1 deletion HISTORY.md
Original file line number Diff line number Diff line change
@@ -1,4 +1,6 @@
# Release 0.5.1
# Release 0.6

## New Algorithms

This update adds new variational inference algorithms in light of the flexibility added in the v0.5 update.
Specifically, the following measure-space optimization algorithms have been added:
Expand All @@ -7,6 +9,13 @@ Specifically, the following measure-space optimization algorithms have been adde
- `KLMinNaturalGradDescent`
- `KLMinSqrtNaturalGradDescent`

## Interface Change

The objective value returned by `estimate_objective` is now the value to be *minimized* by the algorithm.
For instance, for ELBO maximization algorithms, `estimate_objective` will return the negative ELBO.
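Downstream code that monitors the ELBO itself can simply flip the sign. A minimal sketch, assuming `alg`, `q`, and `model` have already been constructed as in the tutorials:

```julia
# `estimate_objective` now returns the value being *minimized*,
# so for ELBO-maximization algorithms it returns the negative ELBO.
neg_elbo = estimate_objective(alg, q, model; n_samples=256)

# Recover the ELBO by negating the result.
elbo = -neg_elbo
```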

## Behavior Change

`KLMinRepGradDescent`, `KLMinRepGradProxDescent`, and `KLMinScoreGradDescent` will now throw a `RuntimeException` if the objective value estimated at a step turns out to be degenerate (`Inf` or `NaN`). Previously, these algorithms ran until `max_iter` even if the optimization run had failed.
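Code that previously relied on runs completing regardless of degenerate estimates can guard against the new failure mode. A hypothetical sketch; the `optimize` call and its argument order are placeholders for illustration, not the exact v0.6 signature:

```julia
# With v0.6, a degenerate (Inf/NaN) objective estimate aborts the run
# with an exception instead of silently iterating to `max_iter`.
result = try
    AdvancedVI.optimize(alg, max_iter, model, q_init)  # placeholder signature
catch e
    @warn "Optimization aborted on a degenerate objective estimate" exception = e
    rethrow()
end
```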

# Release 0.5
Expand Down
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,6 +1,6 @@
name = "AdvancedVI"
uuid = "b5ca4192-6429-45e5-a2d9-87aec30a685c"
version = "0.5.1"
version = "0.6"

[deps]
ADTypes = "47edcb42-4c32-4615-8424-f2b9edc5f35b"
Expand Down
2 changes: 1 addition & 1 deletion bench/Project.toml
Expand Up @@ -20,7 +20,7 @@ Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

[compat]
ADTypes = "1"
AdvancedVI = "0.5, 0.4"
AdvancedVI = "0.6"
BenchmarkTools = "1"
Bijectors = "0.13, 0.14, 0.15"
Distributions = "0.25.111"
Expand Down
2 changes: 1 addition & 1 deletion docs/Project.toml
Expand Up @@ -25,7 +25,7 @@ StatsFuns = "4c63d2b9-4356-54db-8cca-17b64c39e42c"
[compat]
ADTypes = "1"
Accessors = "0.1"
AdvancedVI = "0.5, 0.4"
AdvancedVI = "0.6"
Bijectors = "0.13.6, 0.14, 0.15"
DataFrames = "1"
DifferentiationInterface = "0.7"
Expand Down
2 changes: 1 addition & 1 deletion docs/src/general.md
Expand Up @@ -17,7 +17,7 @@ Therefore, please refer to the documentation of each different algorithm for a d

## [Monitoring the Objective Value](@id estimate_objective)

Furthermore, each algorithm has an associated variational objective.
Furthermore, each algorithm has an associated variational objective that is subject to *minimization*. (By convention, all objectives are minimized, never maximized.)
The progress made by each optimization algorithm can be diagnosed by monitoring the variational objective value.
This can be done by calling the following method.

Expand Down
2 changes: 1 addition & 1 deletion docs/src/klminnaturalgraddescent.md
Expand Up @@ -12,7 +12,7 @@ Since `KLMinNaturalGradDescent` is a measure-space algorithm, its use is restric
KLMinNaturalGradDescent
```

The associated objective value, which is the ELBO, can be estimated through the following:
The associated objective value can be estimated through the following:

```@docs; canonical=false
estimate_objective(
Expand Down
2 changes: 1 addition & 1 deletion docs/src/klminsqrtnaturalgraddescent.md
Expand Up @@ -13,7 +13,7 @@ Since `KLMinSqrtNaturalGradDescent` is a measure-space algorithm, its use is res
KLMinSqrtNaturalGradDescent
```

The associated objective value, which is the ELBO, can be estimated through the following:
The associated objective value can be estimated through the following:

```@docs; canonical=false
estimate_objective(
Expand Down
2 changes: 1 addition & 1 deletion docs/src/klminwassfwdbwd.md
Expand Up @@ -10,7 +10,7 @@ Since `KLMinWassFwdBwd` is a measure-space algorithm, its use is restricted to f
KLMinWassFwdBwd
```

The associated objective value, which is the ELBO, can be estimated through the following:
The associated objective value can be estimated through the following:

```@docs; canonical=false
estimate_objective(
Expand Down
2 changes: 1 addition & 1 deletion docs/src/tutorials/basic.md
Expand Up @@ -232,7 +232,7 @@ function callback(; iteration, averaged_params, restructure, kwargs...)

# Higher fidelity estimate of the ELBO on the averaged parameters
n_samples = 256
elbo_callback = estimate_objective(alg, q_avg, model; n_samples)
elbo_callback = -estimate_objective(alg, q_avg, model; n_samples)

(elbo_callback=elbo_callback, accuracy=acc)
else
Expand Down
2 changes: 1 addition & 1 deletion docs/src/tutorials/subsampling.md
Expand Up @@ -213,7 +213,7 @@ function callback(; iteration, averaged_params, restructure, kwargs...)

# Higher fidelity estimate of the ELBO on the averaged parameters
n_samples = 256
elbo_callback = estimate_objective(alg_full, q_avg, model; n_samples)
elbo_callback = -estimate_objective(alg_full, q_avg, model; n_samples)

(elbo_callback=elbo_callback, accuracy=acc, time_elapsed=time() - time_begin)
else
Expand Down
2 changes: 1 addition & 1 deletion src/AdvancedVI.jl
Expand Up @@ -258,7 +258,7 @@ output(::AbstractVariationalAlgorithm, ::Any) = nothing
"""
estimate_objective([rng,] alg, q, prob; kwargs...)

Estimate the variational objective associated with the algorithm `alg` targeting `prob` with respect to the variational approximation `q`.
Estimate the variational objective to be minimized by the algorithm `alg` when approximating the target `prob` with the variational approximation `q`.

# Arguments
- `rng::Random.AbstractRNG`: Random number generator.
Expand Down
9 changes: 2 additions & 7 deletions src/algorithms/abstractobjective.jl
Expand Up @@ -3,12 +3,7 @@
"""
AbstractVariationalObjective

Abstract type for the VI algorithms supported by `AdvancedVI`.

# Implementations
To be supported by `AdvancedVI`, a VI algorithm must implement `AbstractVariationalObjective` and `estimate_objective`.
Also, it should provide gradients by implementing the function `estimate_gradient`.
If the estimator is stateful, it can implement `init` to initialize the state.
Abstract type for a variational objective to be minimized by a variational algorithm.
"""
abstract type AbstractVariationalObjective end

Expand Down Expand Up @@ -42,7 +37,7 @@ end
"""
estimate_objective([rng,] obj, q, prob; kwargs...)

Estimate the variational objective `obj` targeting `prob` with respect to the variational approximation `q`.
Estimate the minimization objective `obj` for the variational approximation `q` against the target `prob`.

# Arguments
- `rng::Random.AbstractRNG`: Random number generator.
Expand Down
2 changes: 1 addition & 1 deletion src/algorithms/common.jl
Expand Up @@ -11,7 +11,7 @@ const ParamSpaceSGD = Union{
"""
estimate_objective([rng,] alg, q, prob; n_samples, entropy)

Estimate the ELBO of the variational approximation `q` against the target log-density `prob`.
Estimate the negative ELBO of the variational approximation `q` against the target log-density `prob`.

# Arguments
- `rng::Random.AbstractRNG`: Random number generator.
Expand Down
2 changes: 1 addition & 1 deletion src/algorithms/klminnaturalgraddescent.jl
Expand Up @@ -154,7 +154,7 @@ end
"""
estimate_objective([rng,] alg, q, prob; n_samples)

Estimate the ELBO of the variational approximation `q` against the target log-density `prob`.
Estimate the negative ELBO of the variational approximation `q` against the target log-density `prob`.

# Arguments
- `rng::Random.AbstractRNG`: Random number generator.
Expand Down
2 changes: 1 addition & 1 deletion src/algorithms/klminsqrtnaturalgraddescent.jl
Expand Up @@ -129,7 +129,7 @@ end
"""
estimate_objective([rng,] alg, q, prob; n_samples)

Estimate the ELBO of the variational approximation `q` against the target log-density `prob`.
Estimate the negative ELBO of the variational approximation `q` against the target log-density `prob`.

# Arguments
- `rng::Random.AbstractRNG`: Random number generator.
Expand Down
2 changes: 1 addition & 1 deletion src/algorithms/klminwassfwdbwd.jl
Expand Up @@ -124,7 +124,7 @@ end
"""
estimate_objective([rng,] alg, q, prob; n_samples)

Estimate the ELBO of the variational approximation `q` against the target log-density `prob`.
Estimate the negative ELBO of the variational approximation `q` against the target log-density `prob`.

# Arguments
- `rng::Random.AbstractRNG`: Random number generator.
Expand Down
2 changes: 1 addition & 1 deletion src/algorithms/repgradelbo.jl
Expand Up @@ -114,7 +114,7 @@ function estimate_objective(
)
samples, entropy = reparam_with_entropy(rng, q, q, n_samples, obj.entropy)
energy = estimate_energy_with_samples(prob, samples)
return energy + entropy
return -(energy + entropy)
end

function estimate_objective(obj::RepGradELBO, q, prob; n_samples::Int=obj.n_samples)
Expand Down
2 changes: 1 addition & 1 deletion src/algorithms/scoregradelbo.jl
Expand Up @@ -61,7 +61,7 @@ function estimate_objective(
samples = rand(rng, q, n_samples)
ℓπ = map(Base.Fix1(LogDensityProblems.logdensity, prob), eachsample(samples))
ℓq = logpdf.(Ref(q), AdvancedVI.eachsample(samples))
return mean(ℓπ - ℓq)
return -mean(ℓπ - ℓq)
end

function estimate_objective(obj::ScoreGradELBO, q, prob; n_samples::Int=obj.n_samples)
Expand Down