ReceptiveField is not pipelineable #5089
Comments
I'm not sure you'll gain much from multiprocessing; the Ridge estimator will use your cores very efficiently anyway. Also, couldn't you create a GridSearchCV for a pipeline and then pass the pipeline to
Are you sure? It's excruciatingly slow on a very fast server. What I've been doing until now to speed it up a notch is using multiprocessing to run subjects in parallel (I have ~30).
I will try your suggestion and let you know, thanks. You're right that TDR is quite a bit faster, but unfortunately I've been getting noticeably better results with Ridge, which is why I'm using the latter. Still, this doesn't address the issue that ReceptiveField does not take the same input data as other MNE estimators.
Uh oh, now you're making me doubt myself @larsoner :) I'll double-check the Ridge vs. TDR results and get back to you asap.
If you want to see whether it's using your cores, check CPU usage.
Btw, as you know, there isn't much to tune with Ridge anyway. You can use RidgeCV to tune alpha internally.
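For reference, the internal tuning mentioned above looks like this; a minimal sketch on synthetic data (the alpha grid is just an illustration):

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Synthetic regression data for illustration only
rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = X @ rng.randn(5) + 0.1 * rng.randn(100)

# RidgeCV tunes alpha internally (efficient leave-one-out CV by default)
alphas = np.logspace(-3, 3, 7)
reg = RidgeCV(alphas=alphas).fit(X, y)
print(reg.alpha_)  # the alpha that scored best
```

No outer GridSearchCV is needed just for alpha, which sidesteps part of the problem.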
Currently my pipeline consists of a z-scorer (according to several papers this should be done), an RF, and a classifier (I have 2 concurrent streams, but this is done in many other papers as well). Or what if I had too many channels and wanted to run a PCA before the RF? Currently I'm doing all this by hand, but shouldn't I be able to do it all at once in sklearn? What I was hoping to achieve is something similar to this example:
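The referenced example isn't reproduced here, but the general pattern being asked for (z-scoring, optional dimensionality reduction, then a regularized regressor, all searched at once) works in plain sklearn. A sketch with Ridge standing in for ReceptiveField, which cannot currently be used as a step because of its input layout:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# Synthetic 2D data; ReceptiveField's 3D input is exactly what breaks this
rng = np.random.RandomState(0)
X = rng.randn(120, 10)
y = X[:, :3].sum(axis=1) + 0.1 * rng.randn(120)

pipe = Pipeline([
    ("zscore", StandardScaler()),
    ("pca", PCA()),
    ("reg", Ridge()),
])
grid = GridSearchCV(
    pipe,
    {"pca__n_components": [3, 5], "reg__alpha": np.logspace(0, 3, 4)},
    cv=3,  # n_jobs=-1 would parallelize the search across cores
)
grid.fit(X, y)
print(grid.best_params_)
```

Every step and hyperparameter is cross-validated together, which is what the manual for-loop approach can't easily do.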
@choldgraf thoughts on this pipeline for ReceptiveField idea? |
mmmm, I'm not sure what the path forward to making this possible would be, since it's been a while since I've used the code, but I think something like this would be useful for sure. Happy to brainstorm w/ folks.
In retrospect, it seems very inelegant to me that the decoding and (implicitly-)encoding functionality expect data in different shapes.
what do you suggest?
Maybe add a kwarg "data_orientation" and a 3-release switch: default to data_orientation="e_t_c", then warn that it will change to "e_c_t"? That is, if consistency between decoding and encoding is actually agreed to be useful. Maybe it's not actually inherently desirable.
I don't have much of an opinion |
Hi all,
I've been hitting a roadblock recently with the ReceptiveField module. I'd like to use sklearn CV to find optimal hyperparameters for my data (a typical dichotic listening experiment similar to the one in the tutorials:
https://mne-tools.github.io/dev/auto_tutorials/plot_receptive_field.html#compare-model-performance).
In that example the cross-validation and hyperparameter tuning are done "manually" in a for loop. This can be extremely slow as it doesn't use multiprocessing.
However, when I tried using sklearn's GridSearchCV to find the best alpha, I ran into all kinds of nasty sklearn reshaping errors. It's probably due to the input data format required by ReceptiveField (namely (n_times, n_epochs, n_chans)), which differs from the (n_epochs, n_chans, n_times) layout other MNE estimators expect.
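Converting between the two layouts is a single transpose; a minimal numpy sketch (the dimension sizes are made up for illustration):

```python
import numpy as np

# Layout used by MNE decoding estimators: (n_epochs, n_chans, n_times)
n_epochs, n_chans, n_times = 30, 8, 500
X_decoding = np.zeros((n_epochs, n_chans, n_times))

# Reorder axes to a times-first layout: (n_times, n_epochs, n_chans)
X_rf = X_decoding.transpose(2, 0, 1)
print(X_rf.shape)  # (500, 30, 8)

# transpose returns a view, so no data is copied either way
X_back = X_rf.transpose(1, 2, 0)
print(X_back.shape)  # (30, 8, 500)
```

The transpose itself is cheap (a view), but sklearn's meta-estimators still choke on 3D arrays, which is the underlying problem.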
For instance, the code below will fail with a reshape-related error in sklearn:
PS: actually I wanted to pass
'rf__estimator': [Ridge(alpha) for alpha in np.logspace(0, 9, 10)]
but I'm not sure whether this is correct?
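For what it's worth, the pattern in the PS is valid scikit-learn: a param grid may swap whole estimator objects in and out of a named step. A sketch with a plain Ridge step standing in for the rf__estimator case (which would need a working ReceptiveField pipeline):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# Synthetic data for illustration only
rng = np.random.RandomState(0)
X = rng.randn(80, 4)
y = X @ rng.randn(4) + 0.1 * rng.randn(80)

# The grid values are full estimator instances, not scalars; GridSearchCV
# calls set_params(model=...) for each candidate
pipe = Pipeline([("model", Ridge())])
param_grid = {"model": [Ridge(alpha=a) for a in np.logspace(0, 9, 10)]}
grid = GridSearchCV(pipe, param_grid, cv=3).fit(X, y)
print(grid.best_params_["model"].alpha)
```

So the 'rf__estimator' grid should be the right idea in principle; the blocker is only that ReceptiveField's 3D input never survives GridSearchCV's validation.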