multi-threaded (@threads) dotcall/broadcast? #19777

Open
stevengj opened this issue Dec 30, 2016 · 24 comments
Labels
domain:broadcast Applying a function over a collection domain:multithreading Base.Threads and related functionality kind:speculative Whether the change will be implemented is speculative

Comments

@stevengj
Member

It would be nice to be able to put @threads in front of a dot call, e.g. @threads X .= f.(Y), and have it call a multi-threaded version of broadcast (which would assume f is thread-safe).
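
To make the intent concrete, here is a rough sketch (not a proposed implementation; threaded_assign! is a made-up name) of what such a call could lower to, assuming f is thread-safe and the arrays have matching shapes:

# Hypothetical lowering of `@threads X .= f.(Y)` into an explicit threaded loop.
function threaded_assign!(f, X, Y)
    Threads.@threads for i in eachindex(X, Y)
        @inbounds X[i] = f(Y[i])
    end
    return X
end

Y = rand(10^6)
X = similar(Y)
threaded_assign!(sin, X, Y)  # roughly equivalent to X .= sin.(Y), but threaded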

@stevengj stevengj added domain:multithreading Base.Threads and related functionality domain:broadcast Applying a function over a collection labels Dec 30, 2016
@vchuravy
Sponsor Member

vchuravy commented Dec 31, 2016

I don't think this is something that should go into the @threads macro (since that is already too complex). @SimonDanisch has been working on doing this based on dispatch to a special array type in https://github.com/JuliaGPU/GPUArrays.jl/blob/master/src/backends/julia/julia.jl

The beauty of Simon's approach is that it is general and works for distributed arrays, GPU arrays, and native arrays. I concede that it might feel a bit unnatural for native arrays, and I wouldn't be opposed to adding a macro that transforms X .= f.(Y) into the threaded dispatch version, but I don't think that necessarily needs to be in Base.

@vchuravy vchuravy added the kind:speculative Whether the change will be implemented is speculative label Dec 31, 2016
@stevengj
Member Author

Regardless of the name of the macro, it would be nice to have something that just involved a decorator and didn't involve re-allocating or wrapping all of your arrays in some other type. This is the big distinction between threads and GPUs or distributed-memory — with threads, you don't need to decide in advance to put your data on a GPU or in another process.

@SimonDanisch
Contributor

I'd suggest having a macro similar to @fastmath, that replaces the broadcast calls with e.g. threaded_broadcast. Would be nice to share the implementation between what I have and Base :)
I tried to create a minimal working version, but it seems like expand needs some extra attention to get the broadcast back.
Blocks of code expand into an Expr(:thunk, Toplevel LambdaInfo thunk), so I had to iterate through the sub-expressions to make things easier.
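
A very rough sketch of that rewriting idea (threaded_broadcast and the macro name are placeholders, not an existing API): walk the expression and replace dot-call syntax with calls to a threaded broadcast, much as @fastmath swaps out arithmetic functions. Real dot-syntax handling (fusion, .=, in-place forms) is more involved, as noted above.

# Placeholder names; only handles plain dot calls like f.(x, y).
function swap_dotcalls(ex)
    ex isa Expr || return ex
    if ex.head == :. && length(ex.args) == 2 &&
            ex.args[2] isa Expr && ex.args[2].head == :tuple
        f = swap_dotcalls(ex.args[1])
        args = map(swap_dotcalls, ex.args[2].args)
        return Expr(:call, :threaded_broadcast, f, args...)
    else
        return Expr(ex.head, map(swap_dotcalls, ex.args)...)
    end
end

macro threaded(ex)
    esc(swap_dotcalls(ex))
end

so that @threaded y = f.(x) would expand to y = threaded_broadcast(f, x).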

@StefanKarpinski
Sponsor Member

Would it be too crazy for broadcast to be implicitly parallel once threading is stable?

@martinholters
Member

Already, broadcast makes no guarantees concerning the order or exact number of times the given function is called (think sparse matrices), so passing non-pure functions is questionable anyway. That would be in favor of implicit parallelism. Con: when broadcasting over a collection with only a few items, the complexity of the given function decides whether it is worth any threading overhead, which cannot easily be decided by broadcast itself.

@ChrisRackauckas
Member

Could this get a 1.0 milestone? At least some kind of macro would be very useful, if not implicit parallelism (or a mixture: default implicit parallelism, which can be overridden with a macro). Since broadcasting is such a cool feature, this would really complete the story.

@StefanKarpinski StefanKarpinski added this to the 1.0 milestone Feb 9, 2017
@Sacha0
Member

Sacha0 commented Feb 9, 2017

cc @lkuper

@vchuravy
Sponsor Member

If anyone wants to tackle this (developing this outside of Base at first is probably a good idea), my roadmap/ideas would be:

  1. Thin wrapper type around Arrays (see the sketch at the end of this comment)
  2. Macro that automates this wrapping
  3. Co-existence with other types of parallelism. We currently have DistributedArrays as a second user of broadcast
    The nice thing about using a thin wrapper is that you can have DArray{Threaded{Array}} to combine distributed and threaded parallelism.
  4. What happens in the case of mixed accelerated array types? e.g. A GPUArray meets a threaded array?
  5. NUMA-aware parallelism. Communication across NUMA nodes is still quite expensive and memory should probably be pinned to the NUMA group that the threads belong to.

For other inspiration take a look at parallel collections in Scala http://docs.scala-lang.org/overviews/parallel-collections/overview.html (which were the inspiration for Java 8).
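
A minimal sketch of item 1 above (ThreadedArray is an illustrative name, not an actual package type): a thin wrapper that opts arrays into threaded elementwise operations via dispatch, shown here for map rather than the full broadcast machinery:

struct ThreadedArray{T,N,A<:AbstractArray{T,N}} <: AbstractArray{T,N}
    data::A
end

Base.size(A::ThreadedArray) = size(A.data)
Base.getindex(A::ThreadedArray, i::Int...) = A.data[i...]
Base.setindex!(A::ThreadedArray, v, i::Int...) = (A.data[i...] = v)

# Elementwise map dispatching on the wrapper; the index range is split across threads.
function Base.map(f, A::ThreadedArray)
    out = similar(A.data)
    Threads.@threads for i in eachindex(A.data)
        @inbounds out[i] = f(A.data[i])
    end
    return ThreadedArray(out)
end

A full implementation would hook into broadcast itself, which is where the combination questions in items 3 and 4 come in.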

@SimonDanisch
Contributor

Just as an FYI: I'm already getting good speed-ups out of @threads in my broadcast implementation in GPUArrays ;) It's still a bit crappy, but if I have some time I could polish it up and make a first PR for a tbroadcast or whatever we call it ;)

@RoyiAvital

@ChrisRackauckas ,

I like the idea of letting the user choose.
It should be something like a macro (or whatever mechanism is most efficient) with three states:

  1. Auto (default) - a heuristic decides whether threading is ON or OFF.
  2. ON.
  3. OFF.

Thank You.

@ChrisRackauckas
Member

It would be interesting if the heuristic could be applied to arbitrary loops via some macro as well.

@stevengj
Member Author

stevengj commented Feb 11, 2017

@ChrisRackauckas, parallelizing arbitrary loops is precisely what the @threads macro does, no? I'm simply proposing using it for broadcast (and dot calls) as well.

@vchuravy, I'm not sure I like the idea of a special array type, vs. just a @threads decorator on looping constructs. Not needing a special container type is one of the main attractions of shared-memory parallelism.

@ChrisRackauckas
Member

ChrisRackauckas commented Feb 11, 2017

@ChrisRackauckas, parallelizing arbitrary loops is precisely what the @threads macro does, no? I'm simply proposing using it for broadcast (and dot calls) as well.

I was asking if there could be a way to apply whatever implicit parallelism heuristic to a loop. Essentially a macro for "multithread this if the size of the array is greater than x" or whatever is involved in the heuristic, and have the options be tweakable. Then broadcast would just be essentially applying that with the defaults. Your proposal just has a macro, but I'm wondering if implicit parallelism can be added as well.

Otherwise I could see applications wanting to have a bunch of conditionals to check whether multithreading should be run. That last part depends on the overhead of multithreading (which I found to be measurable on many small problems, though the benchmarks may be mixed up with #15276 issues).
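
A sketch of the kind of size-based cutoff being discussed (the threshold value and names are illustrative only, and the real decision is more subtle, as the next comment points out):

const THREAD_THRESHOLD = 10_000  # made-up cutoff; would need benchmarking to tune

function maybe_threaded_map!(f, out, x; threshold = THREAD_THRESHOLD)
    if length(x) < threshold || Threads.nthreads() == 1
        # too small to amortize threading overhead: plain serial loop
        for i in eachindex(out, x)
            @inbounds out[i] = f(x[i])
        end
    else
        Threads.@threads for i in eachindex(out, x)
            @inbounds out[i] = f(x[i])
        end
    end
    return out
end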

@stevengj
Member Author

stevengj commented Feb 11, 2017

@ChrisRackauckas, it's not the size of the array, but the expense of the loop iterations that matters. There's also the issue of load balancing if the loop iterations have unequal cost. I agree that you want to automate this (both deciding how many threads to use and how to load balance) to the extent possible. My understanding is that Cilk (and subsequently OpenMP) mostly solved this issue.

Anyway, I see that as orthogonal to this issue. @threads f.(x) should use the same machinery as @threads for. Improvements in the latter should help the former.

@Sacha0
Member

Sacha0 commented Feb 11, 2017

Ref. #18278 (comment)

@pabloferz
Contributor

Related #1802 (or is this a duplicate?)

@RoyiAvital

@stevengj , there is a lot to consider when deciding whether or not to multithread a loop.
Hence I just hope Julia will also allow the user to explicitly turn it ON or OFF.

@jebej
Contributor

jebej commented Apr 11, 2017

It would be nice to initially have a simple-case version of the macro as explained in the first post (for cases like cos.(x)) for all the people on the mailing list complaining that it is slower than MATLAB. This would make it really easy to answer. Right now people need to make a special function to see the difference:

function threadedcos(x::AbstractArray)
  out = similar(x)                        # allocate the output array
  Threads.@threads for i in eachindex(x)  # split the index range across threads
    out[i] = cos(x[i])
  end
  return out
end

@vchuravy
Sponsor Member

Using GPUArrays you can automatically accelerate broadcast:

using GPUArrays
GPUArrays.init(:julia) # otherwise an OpenCL or CUDA backend might be used
a = GPUArray(rand(Float32, 32, 32)) # wrap a host array in a GPUArray
cos.(a) # broadcast dispatches to the threaded Julia backend

@JeffBezanson JeffBezanson modified the milestones: 1.x, 1.0 May 2, 2017
@bramtayl
Contributor

I wonder if it wouldn't be possible to have some sort of option to switch out which function is used for . broadcasting. Something like ENV[:dotfunction] = threaded_broadcast

@Sacha0
Member

Sacha0 commented Jul 15, 2017

I wonder if it wouldn't be possible to have some sort of option to switch out which function is used for . broadcasting. Something like ENV[:dotfunction] = threaded_broadcast

Ref. the discussion around #16285 (comment). Best!

@yuyichao
Contributor

Do note that this should not be the default or a global setting, at least not before we require every function to be thread-safe. No guarantee on execution order is a very weak requirement compared to thread safety.

@MasonProtter
Contributor

It should probably be noted here, for those looking for this feature, that such a macro has been implemented in Strided.jl.
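
For reference, usage is roughly along these lines (check the Strided.jl README for the authoritative API; this is a sketch from memory):

using Strided

A = rand(10^6)
B = similar(A)
@strided B .= cos.(A)  # wraps the arrays in strided views, which can broadcast with threads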

@oscardssmith
Member

How hard would this be to do? It would be really nice to get a ~10x speedup with a macro for lots of the easy cases.
