Markup fixes.
tpapp committed Nov 15, 2017
1 parent 061c2e3 commit f5de530
Showing 1 changed file with 7 additions and 7 deletions.
14 changes: 7 additions & 7 deletions README.md
@@ -11,34 +11,34 @@ Score function method for gradient of simulated values using automatic differentiation

When `x ∼ F(⋅, β)`, integrals of the form

-```
+```math
s(β) = Eᵦ[h(x)] = ∫ h(x) dF(x,β)
```

can be estimated by simulating values `xᵢ ∼ F(⋅, β)` from the distribution and taking the mean, i.e.

-```
+```math
ŝ(β) = 1/N ∑ᵢ h(xᵢ)
```
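
As a concrete illustration, a minimal sketch of this plain Monte Carlo estimate (the `Normal(β, 1)` distribution and the quadratic `h` below are stand-ins for exposition, not part of the package):

```julia
# hypothetical stand-ins: any distribution F(⋅, β) and integrand h would do
using Distributions, Statistics

β = 1.0
h(x) = x^2                          # integrand
xs = rand(Normal(β, 1.0), 10_000)   # draws xᵢ ∼ F(⋅, β)
ŝ = mean(h(x) for x in xs)          # ŝ(β) ≈ Eᵦ[h(x)], which is β² + 1 here
```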

-We are frequently interested in a Monte Carlo estimator for `∂s/∂θ`. The “score function” trick is to multiply and divide by the density before differentiating, and use
+We are frequently interested in a Monte Carlo estimator for `∂s/∂β`. The “score function” trick is to multiply and divide by the density before differentiating, and use

-```
+```math
∂ŝ(β)/∂β = 1/N ∑ᵢ h(xᵢ) ∂log(f(xᵢ, β))/∂β
```

in Monte Carlo simulations. Note that various technical conditions are required for this (you need to be able to exchange the integral and the differentiation operators); see the references below.
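
Spelled out, assuming the interchange is valid and writing `f(⋅, β)` for the density of `F(⋅, β)`, the trick is

```math
∂s(β)/∂β = ∂/∂β ∫ h(x) f(x, β) dx
         = ∫ h(x) ∂f(x, β)/∂β dx
         = ∫ h(x) [∂log(f(x, β))/∂β] f(x, β) dx
         = Eᵦ[h(x) ∂log(f(x, β))/∂β]
```

so the expectation on the last line can again be estimated by a sample mean over draws `xᵢ ∼ F(⋅, β)`.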

This package implements `score_AD` and `score_AD_log` to program these seamlessly using automatic differentiation (currently [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl) is supported). You can write, for example,

-```
+```julia
dist = SomeDistribution(β)
mean(h(x) * score_AD(pdf(dist, x)) for x in xs)
```

or (note the log, which is preferred for numerical reasons)

-```
+```julia
dist = SomeDistribution(β)
mean(h(x) * score_AD_log(logpdf(dist, x)) for x in xs)
```
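
For intuition only, here is a hypothetical sketch of how such a helper could be built on ForwardDiff dual numbers (the name `my_score` and the implementation are illustrative assumptions, not the package's actual code): it returns a dual with value 1 and partials `∂log(f)/∂β = (∂f/∂β)/f`, so that `h(x) * my_score(pdf(dist, x))` carries `h(xᵢ) ∂log(f(xᵢ, β))/∂β` in its derivative part.

```julia
# hypothetical illustration, not the package's implementation
using ForwardDiff: Dual, value, partials

# keep the tag T so the result composes with the surrounding differentiation of β
my_score(fx::Dual{T}) where {T} = Dual{T}(one(value(fx)), partials(fx) / value(fx))
my_score(fx::Real) = one(fx)   # plain (non-dual) evaluation: the weight is just 1
```

Differentiating the whole estimator with respect to `β` via `ForwardDiff.derivative` then yields the score-function gradient estimate.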