Parallel Sequential Monte Carlo #6

Open
yebai opened this issue Nov 30, 2016 · 15 comments

@yebai
Member

yebai commented Nov 30, 2016

I looked into the newly released Threads.@threads construct in Julia 0.5. It seems that adding a parallelization feature to SMC is simple - it could be done in a few lines of code (see the parallelsmc branch).
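
A minimal sketch of that idea, assuming the standard SMC structure in which each particle is advanced independently between resampling steps; the propagate function and the particle/weight containers below are hypothetical placeholders, not the actual code on the parallelsmc branch:

function threaded_propagate!(particles, logweights, propagate)
	# Each particle is advanced independently, so the loop body has no
	# cross-iteration dependencies and can be split across threads.
	Threads.@threads for i in eachindex(particles)
		particles[i], logweights[i] = propagate(particles[i])
	end
	return particles, logweights
end

Only the propagation loop is threaded here; the resampling step would still run serially on one thread.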

However, the threading feature in Julia is still quite fragile (see e.g. here). I've also filed a bug in the Julia repo.

Maybe we should wait until the Julia team fixes these threading bugs and revisit this feature in a few months' time.

@emilemathieu
Contributor

@yebai: I've reproduced the code from issue JuliaLang/julia#19450, which you opened, and I do not get any segfault. Yet it seems that issue JuliaLang/julia#10441 is still present.

I can't find the parallelsmc branch; do you still have the code?

@emilemathieu
Contributor

The approach posted at https://discourse.julialang.org/t/how-do-i-deal-with-random-number-generation-when-multithreading/5636/2 seems to be a good way to deal with random numbers while using @threads.
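
For reference, a rough sketch of the per-thread RNG pattern from that post, written for current Julia (where MersenneTwister lives in the Random stdlib); the seeds and names are arbitrary:

using Random

# One independent generator per thread, so concurrent rand calls never
# touch a single shared RNG state.
const rngs = [MersenneTwister(1000 + i) for i in 1:Threads.nthreads()]

function threaded_rand!(x)
	Threads.@threads for i in eachindex(x)
		x[i] = rand(rngs[Threads.threadid()])
	end
	return x
end

threaded_rand!(zeros(8))

Note that on Julia 1.7 and later, tasks may migrate between threads, so indexing by threadid is only reliable with Threads.@threads :static; newer Julia versions also give every task its own default RNG, which largely removes the need for this workaround.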

@yebai
Member Author

yebai commented Oct 9, 2017

I can't find the parallelsmc branch, do you still have the code?

Thanks for investigating this. Please find my code on the following branch:

https://github.com/yebai/Turing.jl/tree/hg/parallelsmc

P.S. It might be broken, since it was written a long time ago.

@trappmartin
Member

trappmartin commented Jun 20, 2018

I just implemented a version that seems to work on the multithreaded-PG branch and will have a further look into this issue.

See: https://github.com/TuringLang/Turing.jl/blob/multithreaded-PG/src/core/container.jl#L152-L180

@emilemathieu
Contributor

FYI, a few months ago I implemented a parallel version of IPMCMC: https://github.com/emilemathieu/Turing.jl/blob/7c72a238b4d278720409d845880738d5d2c44ed3/src/samplers/ipmcmc.jl#L70

@trappmartin
Member

Great! I’ll have a look at it.

@trappmartin
Member

The code on the branch https://github.com/TuringLang/Turing.jl/tree/multithreaded-PG now implements a multi-threaded version of the ParticleContainer and an adaptation of the code in https://github.com/emilemathieu/Turing.jl/blob/7c72a238b4d278720409d845880738d5d2c44ed3/src/samplers/ipmcmc.jl#L70 for Julia 0.6. A test for the distributed IPMCMC is currently failing; see below.

As far as I understand the compiler code, it seems that the compiler currently does not generate the inner callback functions, *_model, on all processes. I'll therefore have a deeper look at the compiler implementation.

@emilemathieu Did you encounter the same issue?
cc @yebai

@emilemathieu
Contributor

Great work!
I'm not sure I completely understand what you mean by the inner callback functions, *_model.
I did not encounter such an issue. The only "tricky" part for me was that, within @parallel, objects defined outside the scope of the loop couldn't be updated. Thus I had to explicitly return the objects that needed to be updated from within the loop.
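
Roughly, that pattern looks like the following Julia 0.6-style sketch (on later versions @parallel became Distributed.@distributed): each iteration returns what it computed and a reducer collects the results on the master process, instead of the loop mutating objects defined outside it. run_csmc_node is a hypothetical stand-in for one conditional-SMC sweep, not the actual IPMCMC code:

addprocs(3)

@everywhere run_csmc_node(node) = (node, rand())  # placeholder for one CSMC sweep

# The reducer (vcat) assembles the per-node results on the master process;
# nothing outside the loop is mutated by the workers.
results = @parallel (vcat) for node in 1:4
	[run_csmc_node(node)]
end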

@trappmartin
Member

If I run a simple example to test the parallel implementation, e.g.

addprocs(1)

@everywhere using Turing
srand(125)
x = [1.5, 2]

@everywhere @model gdemo(x) = begin
	s ~ InverseGamma(2, 3)
	m ~ Normal(0, sqrt(s))
	for n in 1:length(x)
		x[n] ~ Normal(m, sqrt(s))
	end
	s, m
end

inference = IPMCMC(30, 500, 4)
chain = sample(gdemo(x), inference)

I get an error that ###gdemo_model#700 is not defined on the second worker. Am I doing something wrong here?

Thanks!

@yebai
Member Author

yebai commented Aug 14, 2018

@trappmartin You need to use @everywhere in all places, i.e.

addprocs(1)

@everywhere using Turing
srand(125)
@everywhere x = [1.5, 2]

@everywhere @model gdemo(x) = begin
	s ~ InverseGamma(2, 3)
	m ~ Normal(0, sqrt(s))
	for n in 1:length(x)
		x[n] ~ Normal(m, sqrt(s))
	end
	s, m
end

@everywhere inference = IPMCMC(30, 500, 4)
@everywhere mf = gdemo(x)
chain = sample(mf, inference)

Supporting parallelism using processes is a pain since it involves data transfer. It's probably better to stick with threads for now until we have a cleaner and more organised code base.

@trappmartin
Member

Right, thanks for the tip!

I wanted to have both supported so that one can use multi-threading locally and also distributed computation if necessary. Unfortunately, I currently have a broken test for the distributed code and see a problem similar to TuringLang/Turing.jl#463, even though the non-distributed code works fine. Working on it...

@mohamed82008
Member

For shared-memory parallelism, I suggest making use of KissThreading.jl (https://github.com/bkamins/KissThreading.jl), which provides shared-memory parallelism without the closure-bug hassle. That package is not registered yet, but I can work on getting it registered if it turns out to be useful here. GPU support is probably also worth considering at some point, but that's a much larger commitment.

yebai transferred this issue from TuringLang/Turing.jl on Dec 17, 2019
@pcjentsch

Is multiprocessing SMC (that is, not multithreading) still supported? I see some mentions here, but it's difficult to find documentation.

yebai pushed a commit that referenced this issue Sep 25, 2021
@bgroenks96

I'll also ask: what is the current status of this? The current IPMCMC implementation doesn't look very parallel... unless I am missing something.

@yebai
Member Author

yebai commented Oct 23, 2023

@FredericWantiez we can finally revisit this functionality based on SSMProblems...
