
dtreduce #137

Open
orenbenkiki opened this issue Jan 8, 2020 · 1 comment

Comments

@orenbenkiki

I just opened JuliaLang/Distributed.jl#67 to ask for better Julia support for a scenario where one has multiple worker processes, each with multiple threads.

In such a scenario, it would be useful to have a dtreduce function that works across all the threads of all the worker processes. This would be different from dreduce, which uses only one thread in each worker process.
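As a sketch of what is being proposed: if dtreduce mirrored the existing dreduce-style call shape (op, transducer, collection), a use might look like the following. Note that dtreduce does not exist; it, and the exact signature, are assumptions of this request.

```julia
using Distributed
addprocs(4)  # illustrative setup: 4 worker processes, each (hypothetically) started with multiple threads

@everywhere using Transducers

# Hypothetical: the proposed dtreduce would fan work out to every thread
# of every worker process, rather than one thread per worker as dreduce does.
result = dtreduce(+, Map(x -> x^2), 1:10_000)
```

The point of the sketch is only the intended parallelism granularity (threads × processes), not the API spelling.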

It would probably be advisable to keep the current behavior of dreduce so it remains compatible with @distributed and pmap. Keeping the ability to have one reducer per process is useful, for example if they invoke @threads internally.


tkf commented Jan 8, 2020

This is already implemented in JuliaLang/julia#133. It'll be in the next release. You can pass threads_basesize = typemax(Int) if you don't want to use threads.
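A sketch of the opt-out mentioned above, assuming a dreduce(op, xf, itr; kwargs...) call shape; the exact keyword set and its default depend on the release in question:

```julia
using Distributed
addprocs(2)
@everywhere using Transducers

xs = 1:1_000

# Default: each worker process may also fold its chunk with multiple threads.
threaded = dreduce(+, Map(x -> x^2), xs)

# With threads_basesize = typemax(Int), each per-worker chunk is a single
# base case, so no additional threads are used inside the workers.
serial_per_worker = dreduce(+, Map(x -> x^2), xs; threads_basesize = typemax(Int))

@assert threaded == serial_per_worker  # same result; only the scheduling differs
```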

> Keeping the ability to have one reducer per process is useful, for example if they invoke @threads internally.

This is not necessary because Julia's scheduler is depth-first (https://julialang.org/blog/2019/07/multithreading). If you use @threads inside (e.g.) the f of Map(f), it will automatically use multiple cores.
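To illustrate the nesting being described, here is a minimal sketch using only Base.Threads; the work done by f is made up for illustration. With the depth-first scheduler, the tasks spawned by @threads inside f can run on whatever cores are idle, even when f is itself called from a parallel fold such as Map(f) in a reduce.

```julia
using Base.Threads

# An illustrative f that spawns its own threaded loop per element.
function f(x)
    out = Vector{Int}(undef, 100)
    @threads for i in 1:100
        out[i] = x * i  # each iteration writes its own slot: no data race
    end
    return sum(out)
end

f(2)  # == 2 * sum(1:100) == 10100
```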

Though this may be a good idea if you use external libraries that are not aware of Julia's scheduler; see JuliaLang/julia#32786.
