Match AdvancedPS 0.4 API #1858
Conversation
Force-pushed `2006ba0` to `b9757e1`
Simple fix that I saw.
Still running into one annoying issue:
This is caused by `Turing.jl/src/inference/Inference.jl`, lines 446 to 452 (commit `5990fae`),

and the new logic in

I think we don't need

The issue here is that
But we have the content of Boxes deep- or shallow-copied here accordingly: https://github.com/TuringLang/Libtask.jl/blob/master/src/tapedfunction.jl#L479 Is it possible to write an MWE to reproduce this issue?
But I'm not sure the `tape_copy` mechanism works as expected for boxed values:

```julia
tape_shallowcopy(x::Core.Box) = Core.Box(tape_shallowcopy(x.contents))
tape_deepcopy(x::Core.Box) = Core.Box(tape_deepcopy(x.contents))

function _tape_copy(v, deepcopy_types)
    if isa(v, deepcopy_types)
        tape_deepcopy(v)
    else
        tape_shallowcopy(v)
    end
end
```

since we dispatch on
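The concern can be made concrete with a self-contained sketch (simplified stand-ins for the Libtask internals quoted above; `shallow`, `deep`, and `copy_like_tape` are hypothetical names, not Libtask's API): because the `isa` check in `_tape_copy` sees the type of the wrapper, a `Core.Box` holding an array never matches a `deepcopy_types` entry such as `Array`, so the boxed contents end up shared rather than deep-copied.

```julia
# Stand-ins for the quoted Libtask helpers (hypothetical, simplified).
shallow(x) = x                                        # shallow copy: share the object
shallow(x::Core.Box) = Core.Box(shallow(x.contents))  # rebox, contents still shared
deep(x) = deepcopy(x)
deep(x::Core.Box) = Core.Box(deep(x.contents))

# Mirrors _tape_copy: the isa check sees the wrapper's type, not the contents'.
copy_like_tape(v, deepcopy_types) = isa(v, deepcopy_types) ? deep(v) : shallow(v)

μ = [0.0]             # an array we would want deep-copied per particle
boxed = Core.Box(μ)   # closures capture reassigned locals as Core.Box

copied = copy_like_tape(boxed, Array)  # Box is not an Array → shallow branch
copied.contents === μ                  # true: the array is still shared
```

If this matches what Libtask does, a `Core.Box{<:Array}` captured by the model closure would silently bypass the `deepcopy_types` request, which could explain particles mutating each other's state.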
Here's an example that fails on this branch:

```julia
using Random
using Turing
using Turing.RandomMeasures
using Libtask
using AdvancedPS
using DynamicPPL

@model function infiniteGMM(x)
    # Hyper-parameters, i.e. concentration parameter and parameters of H.
    α = 1.0
    μ0 = 0.0
    σ0 = 1.0

    # Define random measure, e.g. Dirichlet process.
    rpm = DirichletProcess(α)

    # Define the base distribution, i.e. expected value of the Dirichlet process.
    H = Normal(μ0, σ0)

    # Latent assignments.
    z = zeros(Int, length(x))

    # Locations of the infinitely many clusters.
    μ = zeros(Float64, 0)

    for i in 1:length(x)
        # Number of clusters.
        K = maximum(z)
        nk = Vector{Int}(map(k -> sum(z .== k), 1:K))
        #println("i: $i, K: $K, Mu $(μ) $([objectid(μ)]) z: $z, $([objectid(z)]) $(typeof(z)) $(typeof(μ))")

        # Draw the latent assignment.
        z[i] ~ ChineseRestaurantProcess(rpm, nk)

        # Create a new cluster?
        if z[i] > K
            push!(μ, 0.0)
            # Draw the location of the new cluster.
            μ[z[i]] ~ H
        end

        # Draw observation.
        x[i] ~ Normal(μ[z[i]], 1.0)
    end
end

# Generate some test data.
Random.seed!(2)
m = 3
data = vcat(randn(5), randn(10) .- 5, randn(10) .+ 10)
data .-= mean(data)
data /= std(data)

model_fun = infiniteGMM(data)

# MCMC sampling
Random.seed!(54)
iterations = 300
chain = sample(model_fun, SMC(), iterations; progress=false)
```
Codecov Report: Base: 81.24% // Head: 0.00% // Decreases project coverage by 81.25%.

```
@@            Coverage Diff             @@
##           master    #1858       +/-   ##
==========================================
- Coverage   81.24%    0.00%    -81.25%
==========================================
  Files          21       21
  Lines        1418     1432        +14
==========================================
- Hits         1152        0      -1152
- Misses        266     1432      +1166
```
I'll add some tests, but this is mostly working now. I'm still not quite sure why it fails on convergence for Ubuntu/Julia 1.7 but not on the other images.

Seems like it's just a difference in RNGs, no? As in, it's getting very close to the correct result but just outside of the allowed tolerance. So try increasing iterations/particles for this particular test.

Agree, but if everything is seeded explicitly it shouldn't vary across images?
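One plausible explanation for seeded runs still differing across images (an assumption worth checking, not something established in this thread): Julia 1.7 changed the default global RNG from `MersenneTwister` to `Xoshiro`, so `Random.seed!(54)` yields a different stream on Julia 1.6 and 1.7 even with identical seeds. Pinning the RNG type sidesteps the version dependence:

```julia
using Random

# The default global RNG differs across Julia versions (MersenneTwister on 1.6,
# Xoshiro on 1.7+), so a bare seed is not reproducible across CI images:
Random.seed!(54)
x = rand()  # value depends on the Julia version's default RNG

# Constructing an explicit RNG pins the stream regardless of Julia version:
rng = MersenneTwister(54)
y = rand(rng)  # identical on 1.6 and 1.7+ for the same seed
```

Comparing runs with an explicit `MersenneTwister` (or raising iterations/particles, as suggested above) would distinguish an RNG artifact from a real regression.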
@devmotion Are you happy with the changes in this PR? It (hopefully temporarily, see below) dropped support for Julia 1.6. This is largely due to Libtask, which only supports Julia 1.7 and above; substantial effort would be required to support Julia 1.6 because of major Julia compiler changes. In addition, @FredericWantiez and I discussed moving the general-purpose particle Gibbs sampler implementation (which requires Libtask) into an experimental namespace and loading
Just saw the comment now (I'm very busy until Christmas and not too responsive on GitHub and Slack these days): an unfortunate consequence of the Julia compat change is also that we can't keep using the Julia LTS in TuringTutorials (by doing so we avoid having to update the whole repo every time a new Julia version is available).

I assume you're talking about AdvancedPS here? If one can split the package cleanly into a part with and a part without Libtask, that might be a good idea. Maybe one could move it to a subpackage (e.g., AdvancedPSLibtask) in the AdvancedPS repo to make the separation even clearer but avoid the friction of having to update two separate repos all the time.

If the dependencies of AdvancedPS are manageable, I wonder if it would be better to define the problem type there together with the algorithms. Otherwise it seems easy to introduce circular dependencies in the tests. Maybe an alternative could be to make it a subpackage, again to avoid having to iterate between both repos too much.
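A subpackage split along those lines might look like this (a hypothetical layout; `AdvancedPSLibtask` is the name floated above, and the file names are illustrative):

```
AdvancedPS.jl/                       # main repo
├── Project.toml                     # core package: no Libtask dependency
├── src/AdvancedPS.jl                # resampling + generic SMC/PG machinery
└── lib/
    └── AdvancedPSLibtask/           # subpackage, registered separately
        ├── Project.toml             # depends on AdvancedPS and Libtask
        └── src/AdvancedPSLibtask.jl # tape-based model/trace containers
```

Keeping the subpackage in the same repo means API changes can land atomically in one PR, while downstream users who never load `AdvancedPSLibtask` avoid the Libtask compat constraint (and its Julia 1.7 floor).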
Personally, I'm really not a big fan of dropping support for Julia 1.6. Julia 1.6 is still heavily used by Turing users, so this bump is very exclusionary.

For example, we are in the process of bumping the Setfield compat entry to 1 in the Turing deps. Now this change won't benefit people using Julia 1.6.

Yes, I agree. I think support for the Julia LTS is a good common ground throughout the ecosystem in general. It seems quite annoying for people who can't update Julia as easily (or don't want to, e.g., to avoid hunting bugs or performance regressions with newer versions). Since so many packages depend on each other (which is a good thing in general, IMO), bumping the Julia compat can have a much larger effect than originally anticipated (see, e.g., also tpapp/DynamicHMC.jl#172).
Would it be possible to revert this change, @yebai? I have some PRs I'd like to make, but basing them on the current master worries me. EDIT: Btw, this PR is great, @FredericWantiez! This has absolutely nothing to do with your work.
Yes, we plan to make the dependency on Libtask optional. That would allow us to support Julia 1.6 again. I merged this PR because it's better to decouple from Libtask in a separate PR.
I agree the decoupling is a good idea, but decoupling can be done without merging this PR; you can make a PR to a non-master branch. This is probably the better route as it does not hold back other updates to Turing.

> Yes, we plan to make the dependency on Libtask optional. That would allow us to support Julia 1.6 again. I merged this PR because it's better to decouple from Libtask in a separate PR.
Do we have any updates on this? There are people asking in the Slack channel why we dropped support for 1.6.
Tweaking Turing to work with the new interface of AdvancedPS.

Closes #1903. Closes #1901. Closes #1902.