Fix predict in cropped mode, changed behavior of predict_proba #171
Ah, one thing: why did you name it "test_unit_eeg_classifier.py"? It should be "test_eeg_classifier.py".
In case it was because of the pytest issue, fix it this way: pytest-dev/pytest#3151 (comment)
@robintibor yes, it was because of that pytest issue; I added `__init__.py` to all folders with tests and changed the tests' names.
```diff
@@ -185,3 +185,42 @@ def _default_callbacks(self):
             ),
             ("print_log", PrintLog()),
         ]
+
+    def predict_proba(self, X):
```
to me it does not make sense to have a predict_proba for a regressor
predict_proba is already implemented in skorch, so this changes the behavior in cropped mode to return averaged predictions per trial, not raw predictions per crop. After the change it is consistent with the EEGClassifier predict_proba and predict behavior.
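The consistency described above can be sketched with a small NumPy example. The array shapes follow the PR discussion (n_examples x n_classes x n_crops); the values and variable names are illustrative, not braindecode's actual code:

```python
import numpy as np

# Toy cropped-decoding output for 2 trials, 2 classes, 2 crops.
raw = np.array([[[0.9, 0.8], [0.1, 0.2]],
                [[0.2, 0.3], [0.8, 0.7]]])

# Average over the crops axis -> one score vector per trial,
# as predict_proba would return after this change.
proba = raw.mean(axis=-1)        # shape (2, 2)

# predict is then simply the argmax over classes of those averages.
labels = proba.argmax(axis=1)    # shape (2,)
```

With averaging done inside predict_proba, predict and predict_proba stay mutually consistent in both cropped and trialwise mode.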
I don't get it. What is predict_proba outputting for a regressor? Is it a confusion with what sklearn calls decision_function?
@agramfort I don't think that it has any connection with sklearn decision_function. It's rather a skorch thing. Our … At this moment, …
predict_proba for a regressor is, for me, evil. Maybe @thomasjpfan can shed some light on this decision in skorch?
@agramfort just to clarify it a little bit more, …
Maybe @BenjaminBossan would have insight into why …
There is no particular reason, really. It is present on … In case you'd rather not have the method, you could declare it a …
I am not a skorch dev, but exposing predict_proba for a regressor sounds really weird. At least here in braindecode I would vote to not expose this.
So I see two options:
I think 2. is simply more practical now. What do you think @sliwy?
@robintibor I agree with you, it's simpler. If we merge it we will have the same API as skorch. Both predict and predict_proba will work properly for cropped and trialwise decoding. On the other hand, raising …
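The alternative being discussed, not exposing the method at all, could look roughly like the following. This is a hypothetical sketch (class name and bodies are made up, not braindecode code), showing one common way to hide an inherited method by failing loudly:

```python
# Toy stand-in for a regressor wrapper that inherits predict_proba
# from a base class but does not want to expose it.
class ToyRegressor:
    def predict(self, X):
        # Dummy regression output, one value per sample.
        return [0.0 for _ in X]

    def predict_proba(self, X):
        # Override the inherited method so callers get a clear error
        # instead of meaningless "probabilities" from a regressor.
        raise NotImplementedError(
            "predict_proba is not meaningful for a regressor; use predict()."
        )
```

The trade-off raised in the thread: this diverges from skorch's API, whereas keeping the method (option 2) stays drop-in compatible.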
Be pragmatic, but maybe just add a clear docstring and a comment in the code to explain why it is like this. It can be very confusing.
Yeah, can you add such a comment @sliwy? That would be great, then I would merge.
Done @robintibor, I added a comment with an explanation of why we implement …
Amazing. Since previous commits were fine on Travis and now only the acceptance tests fail again, I have to assume it is another library change (skorch?), therefore merging. Thanks for your work!! :)
Resolves #157
As reported in #135, we had a bug in cropped decoding mode: we did not return values that corresponded to the prediction output of our models. Averaging over crops was missing before returning predictions, so predict returned the argmax over the wrong axis. I've found two possible options to fix that:

1. Let predict_proba return raw predictions (n_examples x n_classes x n_crops) and give users the possibility to aggregate crops by themselves. Do the averaging over crops in the predict method.
2. Let predict_proba return the average over all crops (n_examples x n_classes). This has at least the correct size when interpreted as probabilities (we don't assure that those are valid probabilities, just the output of a torch Module, which may need some nonlinearity at the end). If users want predictions per crop, they need to use skorch's forward method. I assume that this is for more advanced users.

I decided to use the second option as it sounds more user-friendly to me.

I added tests for EEGClassifier and EEGRegressor checking that the predicted values are correct (there are some parts repeated in both tests, like MockModule and MockDataset; are you ok with that in tests, or do you prefer to create a separate module for that?).
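The bug and the chosen fix can be illustrated with a short NumPy sketch. The shapes come from the description above; the function-free, inline style and variable names are assumptions for illustration, not the actual implementation:

```python
import numpy as np

# Cropped-mode model output: 5 trials, 3 classes, 7 crops.
raw = np.random.rand(5, 3, 7)

# The bug: argmax over the last (crops) axis yields a "best crop"
# index per class, shape (5, 3) -- not class labels at all.
wrong = raw.argmax(axis=-1)

# Option 2 (the fix): average over crops first, then argmax over classes.
proba = raw.mean(axis=-1)        # (n_examples, n_classes)
correct = proba.argmax(axis=1)   # one class label per trial
```

Under this fix, predict_proba returns the (5, 3) averaged scores, while per-crop outputs remain reachable via skorch's forward for advanced use.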