I have a function that is running slowly, and I would like to figure out what is holding it up. I have profiled the underlying non-parallel function, and it runs about as fast as it can on data of a size it can handle. However, it really needs to work on blocks of data in parallel, both to finish in a reasonable amount of time and to handle data of any significant size. So I really need to profile it while it is running in parallel with ipyparallel. Are there any suggested ways to go about this? Is it possible to use an existing tool like line_profiler? If not, what other tools might I look at? Does ipyparallel have any tricks for doing this sort of thing?
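One approach I have been experimenting with (a sketch, not ipyparallel-specific machinery): wrap the work function with the standard library's `cProfile` so that each call also returns its own profile report as text. The wrapper below is self-contained; the idea is that the wrapped function could then be shipped to each engine with `view.apply()`, so every engine profiles its own block of data and the reports come back with the results. The `work` function here is just a stand-in for illustration.

```python
import cProfile
import io
import pstats


def profiled(fn):
    """Wrap fn so each call returns (result, profile_report_text).

    The wrapped function is plain Python, so it could be sent to
    ipyparallel engines (e.g. view.apply(profiled(work), block)),
    letting each engine profile its own chunk of the data.
    Note: closures must be picklable/serializable for that to work.
    """
    def wrapper(*args, **kwargs):
        prof = cProfile.Profile()
        result = prof.runcall(fn, *args, **kwargs)
        buf = io.StringIO()
        # Top 10 entries by cumulative time, rendered as text so the
        # report travels back from an engine as an ordinary string.
        pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(10)
        return result, buf.getvalue()
    return wrapper


# Stand-in for the real per-block computation.
def work(n):
    return sum(i * i for i in range(n))


result, report = profiled(work)(10_000)
print(report)
```

This gives per-engine function-level timings rather than line-level ones; for line-level detail, `line_profiler`'s `LineProfiler` object could presumably be used the same way inside the remote call, since it also produces a text report.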