I need to run multiple `TFFMRegressor` objects in joblib `Parallel`. To do so, I passed a `session_config` parameter. However, my cores do not seem to run whenever I use `n_jobs=2` or higher in `Parallel`; my Python notebook cell just hangs, never completes, and my processors are not used. At `n_jobs=1`, everything runs fine. What am I missing? Would I be better off using `polylearn` instead for this kind of task?
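For reference, a minimal sketch of the setup being described, assuming tffm's documented constructor arguments and hypothetical toy data; the exact `session_config` contents here are a guess:

```python
import numpy as np
import tensorflow as tf
from joblib import Parallel, delayed
from tffm import TFFMRegressor

# Hypothetical toy data (the real problem has about 400 observations).
rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(400, 10)).astype(np.float32)
y = rng.normal(size=400).astype(np.float32)

def fit_and_predict(rank):
    # Each worker builds its own model (and its own TensorFlow graph/session).
    model = TFFMRegressor(
        order=2,
        rank=rank,
        optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
        n_epochs=50,
        init_std=np.float32(0.001),
        input_type='dense',
        # A guess at the session_config in question; tffm passes it to tf.Session.
        session_config=tf.ConfigProto(intra_op_parallelism_threads=1),
    )
    model.fit(X, y, show_progress=False)
    return model.predict(X)

# Hangs in the notebook whenever n_jobs >= 2; runs fine with n_jobs=1.
results = Parallel(n_jobs=2)(delayed(fit_and_predict)(r) for r in [2, 4, 8])
```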
OK, it appears that passing this `session_config` parameter does not do anything, at least on my machine, so it is not needed.
I was able to use `Parallel` in the end by doing the following (sketched in code after this list):

- passing a `numpy.float32` value for `init_std` (otherwise I sometimes got an error)
- standardizing the values in `X` so that $|x_{ij}| \leq 1$ (otherwise I got an error when using a high `order`)

It appears that passing a `numpy.float32` value is not necessary for the `reg` parameter.
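A sketch of those two workarounds, assuming max-abs scaling is an acceptable way to get $|x_{ij}| \leq 1$ (any per-feature scaling that bounds the values would do):

```python
import numpy as np
from sklearn.preprocessing import MaxAbsScaler
from tffm import TFFMRegressor

# Hypothetical data; in practice X and y come from the actual problem.
rng = np.random.RandomState(0)
X = rng.normal(size=(400, 10))
y = rng.normal(size=400).astype(np.float32)

# Scale each feature so that |x_ij| <= 1.
X_scaled = MaxAbsScaler().fit_transform(X).astype(np.float32)

model = TFFMRegressor(
    order=3,                     # the error showed up at higher orders
    rank=4,
    init_std=np.float32(0.001),  # pass init_std as a numpy.float32
    reg=0.01,                    # a plain Python float appears fine for reg
    input_type='dense',
)
model.fit(X_scaled, y, show_progress=False)
```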
Now, I know that my question is highly problem-dependent, but what are good ranges of values when doing a randomized search over the different parameters? My problem contains about 400 observations, and I see that there are several parameters that could be tuned, such as `order`, `rank`, the `optimizer` (and its parameters), `reg`, etc. It even appears that `batch_size` and `n_epochs` are somewhat linked.
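Not an answer on good ranges, but for concreteness, here is one way such a randomized search could be wired up with scikit-learn's `ParameterSampler`; the search space below is an illustrative assumption, not a recommendation:

```python
import numpy as np
from sklearn.model_selection import ParameterSampler, train_test_split
from sklearn.metrics import mean_squared_error
from tffm import TFFMRegressor

# Illustrative search space; the ranges are assumptions, not tuned values.
param_space = {
    'order': [2, 3],
    'rank': [2, 4, 8, 16],
    'reg': np.logspace(-4, 0, 20),
    'n_epochs': [50, 100, 200],
    'batch_size': [-1, 32, 64],  # -1 means full-batch in tffm
}

# X_scaled and y as in the sketches above.
X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, random_state=0)

for params in ParameterSampler(param_space, n_iter=10, random_state=0):
    model = TFFMRegressor(
        init_std=np.float32(0.001),
        input_type='dense',
        **params,
    )
    model.fit(X_tr, y_tr, show_progress=False)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(params, mse)
```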