restored previous tests behavior #3
Conversation
… returns mocked values
This fixes all the tests failing in … When it comes to …
@sliwy There is a case for …
How about the assumption that the second item returned by …
But we cannot load the whole dataset just to get all possible values for …
Yeah, that's true.
Maybe we can just make two mock model classes? One for the old tests that returns pred and one for the new ones that can be trained.
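A minimal sketch of the two-mock-class idea suggested above. All class and attribute names here are invented for illustration; they are not the actual classes from the braindecode test suite, and the real mocks would subclass `torch.nn.Module`:

```python
# Hypothetical sketch: two separate mock model classes instead of one.
# Names are illustrative only; real test mocks would be nn.Modules.

class MockModelReturningPreds:
    """For the old tests: ignores the input and returns stored predictions."""

    def __init__(self, preds):
        self.preds = preds

    def __call__(self, X):
        # No computation at all: always hand back the mocked values.
        return self.preds


class MockTrainableModel:
    """For the new tests: a trivial model that can actually be trained."""

    def __init__(self, n_outputs):
        self.n_outputs = n_outputs
        self.weight = 0.0  # a single learnable parameter

    def __call__(self, X):
        # A real (if trivial) forward pass, so a training loop has
        # something to update.
        return [self.weight * x for x in X]
```

Splitting the mocks this way keeps the old tests deterministic while letting the new tests exercise an actual fit loop.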
I tried to provide …
Maybe we should use the second one, if we do not support plain …
I don’t think we should use …
My philosophy was to fill all the parameters I can and ignore the others.
Philosophy understood 😃 , there could be code that worked with plain …
That's what I'm worrying about, but maybe it's not so important.
I don’t think that should be the case, because currently all examples provide n_outputs or the old n_classes to the model.
Are you OK with doing 2 different mock model classes in the test?
I'm trying to provide …
and for some reason I'm receiving a strange error that was not there before:
but there is no layer that has a kernel size of 30 😅
@PierreGtch do you have any idea why it's like that:
It looks like during training the …
If the model was passed initialised to the EEGClassifier, then it was probably re-initialized. I think we should just raise an error if an initialised model is passed to EEGClassifier or EEGRegressor.
To me that's not the way we should go, at least not without a deprecation period and raising errors when an nn.Module is provided. There is no example that uses a class as a parameter to EEGClassifier, so it breaks the API in a way I would not expect. But maybe we can fix that in the code. Why do we filter the kwargs only to …?
Okay, I see. I used …
Passing initialised models to skorch is not a good practice and is not recommended: https://skorch.readthedocs.io/en/stable/user/neuralnet.html#module I would be in favour of not supporting initialised models and raising an error. We did not discuss that during the sprint. @bruAristimunha, @robintibor what do you think?
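A sketch of the check discussed above: reject an already-constructed model and require the class itself, in the skorch style where constructor arguments are passed with a `module__` prefix. The `EEGClassifier` below is a stand-in written for illustration, not the real braindecode class:

```python
# Illustrative stand-in, not the real braindecode EEGClassifier.

class InitializedModuleError(TypeError):
    """Raised when an instance is passed where a class is expected."""


class EEGClassifier:
    def __init__(self, module, **module_kwargs):
        # skorch-style usage: pass the *class* plus `module__`-prefixed
        # parameters; an already-initialised instance is refused so that
        # skorch's own (re-)initialisation cannot silently discard state.
        if not isinstance(module, type):
            raise InitializedModuleError(
                "Pass the model class (with module__ parameters), "
                "not an initialised instance."
            )
        self.module = module
        self.module_kwargs = module_kwargs
```

Usage under these assumptions: `EEGClassifier(ShallowFBCSPNet, module__n_outputs=2)` is accepted, while `EEGClassifier(ShallowFBCSPNet(...))` raises immediately instead of being silently re-initialised during `fit`.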
I agree! It's just that I think we should take this decision consciously and not in passing :) I think after the changes it will be easier to use with skorch and the scikit-learn API overall. However, we will need some additional changes.
Additional point: …
b35dbdd into PierreGtch:auto-signal-params
Instead of performing any computations in the test, we always return mocked values of self.preds.
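The change described above can be sketched as follows. This is an illustrative mock, assuming predictions are stored up front and handed back batch by batch; the names and the slicing scheme are assumptions, not the actual test code:

```python
# Illustrative sketch of a mock that performs no computation and
# returns pre-stored predictions instead. Names are hypothetical.

class MockModule:
    def __init__(self, preds):
        self.preds = preds
        self._pos = 0  # read position into the stored predictions

    def forward(self, X):
        # Return the next slice of mocked predictions, one per input
        # row, without computing anything from X.
        batch = self.preds[self._pos:self._pos + len(X)]
        self._pos += len(X)
        return batch
```

Because the forward pass never touches the input, the old tests stay independent of model internals such as kernel sizes or the number of outputs.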