Fix some typos (#1974)
Signed-off-by: Alexander Seiler <seileralex@gmail.com>
goggle committed Apr 4, 2023
1 parent cd5dd1a commit 3e71a76
Showing 8 changed files with 10 additions and 10 deletions.
2 changes: 1 addition & 1 deletion docs/src/contributing/style-guide.md
@@ -454,7 +454,7 @@ When referencing Julia in documentation note that "Julia" refers to the programm

```julia

- # A commment
+ # A comment
code

# Another comment
2 changes: 1 addition & 1 deletion docs/src/for-developers/interface.md
@@ -61,7 +61,7 @@ The full code for this implementation is housed in

### Imports

- Let's begin by importing the relevant libraries. We'll import `AbstracMCMC`, which contains
+ Let's begin by importing the relevant libraries. We'll import `AbstractMCMC`, which contains
the interface framework we'll fill out. We also need `Distributions` and `Random`.

```julia
2 changes: 1 addition & 1 deletion docs/src/for-developers/variational_inference.md
@@ -242,7 +242,7 @@ This means that if we have a differentiable bijection ``f: \mathrm{supp} \left(
\mathbb{P}\_p(x \in A) = \int\_{f^{-1}(A)} p \left(f^{-1}(y) \right) \ \left| \det \mathcal{J}\_{f^{-1}}(y) \right| \mathrm{d}y,
```

- where ``\mathcal{J}_{f^{-1}}(x)`` denotes the jacobian of ``f^{-1}`` evaluted at ``x``. Observe that this defines a probability distribution
+ where ``\mathcal{J}_{f^{-1}}(x)`` denotes the jacobian of ``f^{-1}`` evaluated at ``x``. Observe that this defines a probability distribution

```math
\mathbb{P}\_{\tilde{p}}\left(y \in f^{-1}(A) \right) = \int\_{f^{-1}(A)} \tilde{p}(y) \mathrm{d}y,
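To make the change-of-variables identity in this hunk concrete (an illustrative example, not part of the commit): take ``p`` supported on ``(0, \infty)`` and ``f = \log``, so that ``f^{-1}(y) = e^y`` and ``\left| \det \mathcal{J}\_{f^{-1}}(y) \right| = e^y``. The transformed density is then

```math
\tilde{p}(y) = p\left(e^y\right) e^y,
```

and for the unit exponential ``p(x) = e^{-x}`` this gives ``\tilde{p}(y) = \exp\left(y - e^y\right)``, a density supported on all of ``\mathbb{R}``, which is precisely the kind of unconstrained reparameterization used in variational inference.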
4 changes: 2 additions & 2 deletions src/inference/hmc.jl
@@ -288,7 +288,7 @@ Arguments:
- `n_adapts::Int` : Numbers of samples to use for adaptation.
- `δ::Float64` : Target acceptance rate. 65% is often recommended.
- `λ::Float64` : Target leapfrog length.
- - `ϵ::Float64=0.0` : Inital step size; 0 means automatically search by Turing.
+ - `ϵ::Float64=0.0` : Initial step size; 0 means automatically search by Turing.
For more information, please view the following paper ([arXiv link](https://arxiv.org/abs/1111.4246)):
@@ -356,7 +356,7 @@ Arguments:
- `δ::Float64` : Target acceptance rate for dual averaging.
- `max_depth::Int` : Maximum doubling tree depth.
- `Δ_max::Float64` : Maximum divergence during doubling tree.
- - `init_ϵ::Float64` : Inital step size; 0 means automatically searching using a heuristic procedure.
+ - `init_ϵ::Float64` : Initial step size; 0 means automatically searching using a heuristic procedure.
"""
struct NUTS{AD,space,metricT<:AHMC.AbstractMetric} <: AdaptiveHamiltonian{AD}
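The step size `ϵ` and leapfrog length documented in these hunks govern the leapfrog integrator at the core of HMC and NUTS. A minimal, self-contained sketch (a 1-D standard-normal target; illustrative only, not Turing's actual implementation):

```julia
# Illustrative leapfrog integrator for HMC on a 1-D standard-normal
# target: potential U(q) = q^2 / 2, so ∇U(q) = q. A sketch of what
# the step size ϵ and the number of leapfrog steps control.
gradU(q) = q

function leapfrog(q, p, ϵ, n_steps)
    p -= ϵ / 2 * gradU(q)      # initial half step for momentum
    for _ in 1:(n_steps - 1)
        q += ϵ * p             # full position step
        p -= ϵ * gradU(q)      # full momentum step
    end
    q += ϵ * p                 # final position step
    p -= ϵ / 2 * gradU(q)      # final half step for momentum
    return q, p
end

# The Hamiltonian H = U(q) + p^2/2 should be nearly conserved; small
# conservation error is what keeps HMC acceptance rates high.
H(q, p) = q^2 / 2 + p^2 / 2
q1, p1 = leapfrog(1.0, 0.5, 0.05, 20)
```

Shrinking `ϵ` reduces the energy error (pushing the acceptance rate toward the target `δ` during adaptation) at the cost of more gradient evaluations per trajectory.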
2 changes: 1 addition & 1 deletion test/inference/ess.jl
@@ -57,7 +57,7 @@

# Different "equivalent" models.
# NOTE: Because `ESS` only supports "single" variables with
- #       Guassian priors, we restrict ourselves to this subspace by conditioning
+ #       Gaussian priors, we restrict ourselves to this subspace by conditioning
# on the non-Gaussian variables in `DEMO_MODELS`.
models_conditioned = map(DynamicPPL.TestUtils.DEMO_MODELS) do model
# Condition on the non-Gaussian random variables.
2 changes: 1 addition & 1 deletion test/inference/hmc.jl
@@ -20,7 +20,7 @@

check_numerical(chain, [:p], [10/14], atol=0.1)
end
@numerical_testset "contrained simplex" begin
@numerical_testset "constrained simplex" begin
obs12 = [1,2,1,2,2,2,2,2,2,2]

@model function constrained_simplex_test(obs12)
2 changes: 1 addition & 1 deletion test/stdlib/distributions.jl
@@ -39,7 +39,7 @@
multi_dim = 4
# 1. UnivariateDistribution
# NOTE: Noncentral distributions are commented out because of
- #       AD imcompatibility of their logpdf functions
+ #       AD incompatibility of their logpdf functions
dist_uni = [
Arcsine(1, 3),
Beta(2, 1),
4 changes: 2 additions & 2 deletions test/test_utils/random_measure_utils.jl
@@ -4,12 +4,12 @@ function compute_log_joint(observations, partition, tau0, tau1, sigma, theta)
prob = k*log(sigma) + lgamma(theta) + lgamma(theta/sigma + k) - lgamma(theta/sigma) - lgamma(theta + n)
for cluster in partition
prob += lgamma(length(cluster) - sigma) - lgamma(1 - sigma)
-         prob += compute_log_conditonal_observations(observations, cluster, tau0, tau1)
+         prob += compute_log_conditional_observations(observations, cluster, tau0, tau1)
end
prob
end

- function compute_log_conditonal_observations(observations, cluster, tau0, tau1)
+ function compute_log_conditional_observations(observations, cluster, tau0, tau1)
nl = length(cluster)
prob = (nl/2)*log(tau1) - (nl/2)*log(2*pi) + 0.5*log(tau0) + 0.5*log(tau0+nl)
prob += -tau1/2*(sum(observations)) + 0.5*(tau0*mu_0+tau1*sum(observations[cluster]))^2/(tau0+nl*tau1)
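The `lgamma` terms in `compute_log_joint` above encode the Pitman-Yor exchangeable partition probability. Each `lgamma` difference telescopes into a log-product for integer counts, so the partition part can be written without special functions at all. A self-contained sketch (the helper name `pitman_yor_log_eppf` is hypothetical, not from the repository):

```julia
# Hypothetical sketch: log probability of a partition under a
# Pitman-Yor process with discount sigma and concentration theta,
# equivalent to the lgamma form above for integer cluster sizes.
function pitman_yor_log_eppf(partition; sigma, theta)
    k = length(partition)          # number of clusters
    n = sum(length, partition)     # total number of observations
    # k*log(σ) + lgamma(θ/σ + k) - lgamma(θ/σ)  ==  Σ_{i=0}^{k-1} log(θ + iσ)
    prob = sum(log(theta + i * sigma) for i in 0:(k - 1))
    # lgamma(θ) - lgamma(θ + n)  ==  -Σ_{i=0}^{n-1} log(θ + i)
    prob -= sum(log(theta + i) for i in 0:(n - 1))
    for cluster in partition
        # lgamma(|c| - σ) - lgamma(1 - σ)  ==  Σ_{j=1}^{|c|-1} log(j - σ)
        nl = length(cluster)
        prob += nl > 1 ? sum(log(j - sigma) for j in 1:(nl - 1)) : 0.0
    end
    return prob
end
```

As a sanity check, the probabilities of all partitions of a fixed set sum to one; for two observations the only partitions are `[[1, 2]]` and `[[1], [2]]`.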
