
Parallelization of DiscreteDP solvers #261

Open
jstac opened this issue Jul 25, 2016 · 9 comments

jstac (Contributor) commented Jul 25, 2016

Is it possible to modify the solvers within DiscreteDP so that they can exploit multiple cores? This is a question from a user (Jack Shin) and I'm sure it would be very valuable to users if that could be done transparently (and if they had access to a machine with sufficiently many cores).

Same question goes for the Julia code, come to think of it.

@spencerlyon2 @cc7768 @oyamad Any thoughts?

jstac added the wishlist label Jul 25, 2016
sglyon (Member) commented Jul 25, 2016

Hi @jstac, this would be very cool.

Off the top of my head, the two most obvious places for parallelism are:

  1. Doing the matrix operation in the Bellman operator here
  2. Doing s_wise_max here or here

I think the most straightforward way to get this in Python would be to figure out how to write those operations in a way that Numba could automatically parallelize for us (see the sketch below).
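
For concreteness, here is a minimal sketch (not QuantEcon's actual implementation) of how the max step could be parallelized with Numba, assuming the dense product formulation where R is an (n, m) reward array and Q is an (n, m, n) transition array; the function and variable names are illustrative:

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def bellman_max(R, Q, v, beta):
    # One application of the Bellman max step: for each state s, take the
    # max over actions of R[s, a] + beta * Q[s, a] @ v. States are
    # independent, so the outer loop can run in parallel via prange.
    n, m = R.shape
    Tv = np.empty(n)
    sigma = np.empty(n, dtype=np.int64)
    for s in prange(n):
        best_val = -np.inf
        best_a = 0
        for a in range(m):
            val = R[s, a] + beta * np.dot(Q[s, a], v)
            if val > best_val:
                best_val = val
                best_a = a
        Tv[s] = best_val
        sigma[s] = best_a
    return Tv, sigma
```

This covers both the matrix operation and the s-wise max in one pass; whether it beats the vectorized NumPy version would need benchmarking.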

oyamad (Member) commented Jul 26, 2016

I guess we should first fix the target use cases. (I have no experience trying to solve a large-scale problem.)

  1. If the use case is that the user wants to call DiscreteDP.solve many times in a for loop, they might want to parallelize the loop itself (I don't know how; see the sketch after this list). In that case, I am worried about @spencerlyon2's concern from probvec: Use guvectorize with target='parallel' #253 (comment) (although I am not sure whether it applies here).
  2. DiscreteDP is constrained by the machine's memory, since by design the R and Q arrays must be constructed in advance. So on a machine with 16GB, for example, a single call to DiscreteDP.solve shouldn't take much time. Going back to the discussion following DiscreteDP: Issues #185 (comment), would Dask help with parallelization here?
  3. We should do profiling first anyway. I have some benchmarks here.
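
On point 1, a hedged sketch of one possible approach: running independent solves across processes with joblib. The model here is the 2-state, 2-action example from the DiscreteDP docstring, and the parameter grid and `solve_one` helper are hypothetical, just to make the sketch self-contained:

```python
import numpy as np
from joblib import Parallel, delayed
from quantecon import DiscreteDP

def solve_one(beta):
    # 2-state, 2-action example model; -inf marks an infeasible action.
    R = np.array([[5.0, 10.0], [-1.0, -float('inf')]])
    Q = np.array([[(0.5, 0.5), (0.0, 1.0)],
                  [(0.0, 1.0), (0.5, 0.5)]])
    return DiscreteDP(R, Q, beta).solve(method='policy_iteration')

betas = np.linspace(0.90, 0.99, 8)
# One process per core; each worker runs a full, independent solve.
results = Parallel(n_jobs=-1)(delayed(solve_one)(b) for b in betas)
```

Process-based parallelism like this would presumably sidestep the nested-threading concern from #253, at the cost of duplicating R and Q in each worker.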

jstac (Contributor, Author) commented Aug 27, 2017

I agree with the need to profile.

Question: can we automate parallelization of the max step in the application of T, or in the computation of greedy policies, and get significant gains through the JIT compiler? See In[23] of this notebook:

http://nbviewer.jupyter.org/gist/natashawatkins/2ba8acca8dde831f4cafc09b9990b91c

(Thanks to @natashawatkins)

The gains there are large. But this would need to be tested on a variety of input data.

oyamad (Member) commented Aug 28, 2017

We may try Numba's new parallelization technology.

jstac (Contributor, Author) commented Aug 28, 2017

Yes, good point.

In my understanding this is still experimental, whereas target='parallel' on @vectorize is already standard.
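
For reference, the standard usage looks like this (a toy elementwise example, not DiscreteDP code):

```python
import numpy as np
from numba import vectorize, float64

# A compiled ufunc whose elementwise loop is distributed across threads.
@vectorize([float64(float64, float64)], target='parallel')
def clipped_sum(x, y):
    z = x + y
    return z if z > 0.0 else 0.0

a = np.random.randn(1_000_000)
b = np.random.randn(1_000_000)
out = clipped_sum(a, b)
```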

oyamad (Member) commented Aug 28, 2017

For this function we can compare @guvectorize and prange (a sketch of both follows). For the other functions, I don't know how to use @guvectorize.
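
A hedged sketch of what such a comparison might look like, applied to a row-wise max with the same shape as s_wise_max (an (n, m) array reduced over the action axis); the function names here are illustrative:

```python
import numpy as np
from numba import guvectorize, njit, prange

# Variant 1: a gufunc with core signature (m)->(), parallelized over rows.
@guvectorize(['void(float64[:], float64[:])'], '(m)->()', target='parallel')
def row_max_gu(row, out):
    out[0] = row[0]
    for a in range(1, row.shape[0]):
        if row[a] > out[0]:
            out[0] = row[a]

# Variant 2: an explicit prange loop over states.
@njit(parallel=True)
def row_max_prange(vals):
    n = vals.shape[0]
    out = np.empty(n)
    for i in prange(n):
        out[i] = vals[i].max()
    return out

vals = np.random.randn(10_000, 50)  # (n states, m actions)
np.testing.assert_allclose(row_max_gu(vals), row_max_prange(vals))
```

Timing the two variants on realistic (n, m) shapes would be part of the profiling suggested above.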

zhoujianberkeley commented
Just to check: has the parallelization been finished, or is it still open for contribution?

jstac (Contributor, Author) commented Oct 16, 2020

As far as I know this is still an open issue, and we'd love to see it pushed forward. Thanks for your interest @zhoujianberkeley.

@oyamad / @mmcky, do you know if any work has been done on this?

oyamad (Member) commented Oct 17, 2020

No part has been parallelized. Thanks for your interest @zhoujianberkeley!
