There are certain cases where the efficiency of the HMC sampler can be substantially improved, such as in the following example:
```julia
@model nbmodel begin
  theta ~ Dirichlet(alpha)
  phi = Array{Any}(K)
  for k = 1:K
    phi[k] ~ Dirichlet(β)
  end
  for m = 1:M
    z[m] ~ Categorical(theta)
  end
  for n = 1:N
    w[n] ~ Categorical(phi[z[doc[n]]])
  end
  phi
end
```
If we introduce a vectorising notation, the likelihood computation step can be re-written as:
```julia
@model nbmodel begin
  theta ~ Dirichlet(alpha)
  phi = Array{Any}(K)
  for k = 1:K
    phi[k] ~ Dirichlet(β)
  end
  z ~ Categorical(theta)                           # z[m] ~ Categorical(theta)
  w ~ [Categorical(phi[z[doc[n]]]) for n = 1:N]    # w[n] ~ Categorical(phi[z[doc[n]]])
  phi
end
```
Vectorised code can be more efficient because automatic differentiation works better over whole-array operations, and because it avoids the overhead associated with explicit loops.
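To illustrate the point outside of the model macro, here is a minimal sketch (plain Julia, no packages; the probability vector and observations are hypothetical) comparing a per-observation loop over categorical log-densities with a single vectorised expression that computes the same log-likelihood:

```julia
# Loop form: one scalar log-density evaluation per observation,
# which AD must trace individually.
function loop_loglik(p, w)
    lp = 0.0
    for n in eachindex(w)
        lp += log(p[w[n]])   # log-pdf of Categorical(p) at w[n]
    end
    return lp
end

# Vectorised form: one indexed broadcast plus a sum, which AD can
# differentiate as a single array operation.
vec_loglik(p, w) = sum(log.(p[w]))

p = [0.2, 0.3, 0.5]     # hypothetical category probabilities
w = [1, 3, 3, 2, 1]     # hypothetical observed categories
println(loop_loglik(p, w) ≈ vec_loglik(p, w))   # prints true
```

Both forms compute the same quantity; the vectorised one replaces N scalar operations on the AD tape with a single array operation, which is where the speed-up in the vectorised model above comes from.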
(The full model can be found here)
Reference: Stan documentation (see Section 4.2)