
EegModuleMixin extension #514

Merged
13 commits merged into braindecode:master on Sep 8, 2023

Conversation

sliwy
Collaborator

@sliwy sliwy commented Sep 8, 2023

Adds input_shape, get_output_shape, and to_dense_prediction_model to EEGModuleMixin, cf. #457.

Deprecates the standalone get_output_shape and to_dense_prediction_model functions; they will be removed in version 1.0.
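The deprecation could be handled with a thin shim that warns and delegates to the new method. This is a hypothetical sketch, not braindecode's actual code (the commit log mentions switching to sklearn's deprecation helper, so the real mechanism differs):

```python
import warnings


def get_output_shape(model, *args, **kwargs):
    # Hypothetical deprecation shim: emit a warning, then delegate to
    # the new EEGModuleMixin method of the same name.
    warnings.warn(
        "get_output_shape is deprecated and will be removed in version 1.0; "
        "use the model's get_output_shape method instead.",
        DeprecationWarning,
        stacklevel=2,
    )
    return model.get_output_shape(*args, **kwargs)
```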

Once we have a proper way of removing the log-softmax, we may replace the functions with methods everywhere.

Solves #432 by raising an error suggesting that the input size may be too small.
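A minimal sketch of the idea (not braindecode's actual implementation): compute the output shape with a dummy forward pass, and when that pass fails, re-raise with a hint that the input may be too small, as proposed for #432:

```python
import torch
from torch import nn


def get_output_shape(model: nn.Module, n_chans: int, n_times: int) -> tuple:
    """Return the output shape for one dummy EEG window.

    Feeds a zero tensor of shape (1, n_chans, n_times) through the model;
    a RuntimeError here typically means n_times is too small for the
    model's successive convolutions and poolings.
    """
    with torch.no_grad():
        dummy = torch.zeros(1, n_chans, n_times)
        try:
            out = model(dummy)
        except RuntimeError as exc:
            raise RuntimeError(
                f"Forward pass failed; the input size (n_times={n_times}) "
                "may be too small for this model."
            ) from exc
    return tuple(out.shape)


model = nn.Sequential(nn.Conv1d(8, 4, kernel_size=5), nn.ReLU())
print(get_output_shape(model, n_chans=8, n_times=100))  # (1, 4, 96)
```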

@codecov

codecov bot commented Sep 8, 2023

Codecov Report

Merging #514 (c58853b) into master (a29166e) will increase coverage by 0.06%.
The diff coverage is 94.73%.

❗ Current head c58853b differs from pull request most recent head 94f2959. Consider uploading reports for the commit 94f2959 to get more accurate results

@@            Coverage Diff             @@
##           master     #514      +/-   ##
==========================================
+ Coverage   84.33%   84.39%   +0.06%     
==========================================
  Files          63       63              
  Lines        4514     4525      +11     
==========================================
+ Hits         3807     3819      +12     
+ Misses        707      706       -1     


@sliwy
Collaborator Author

sliwy commented Sep 8, 2023

Responding to @tgnassou's comment:

The function get_output_shape only works for classes that inherit from nn.Sequential. I would like to use it to replace the _len_last_layer function of SleepStagerChambon (

def _len_last_layer(self, n_channels, input_size):

).

Actually, the problem arises when get_output_shape is used on an nn.Module at a point where part of the model referenced in forward does not exist yet. I don't see an easy fix. Maybe we could create a method that runs the forward pass up to the last fully connected layer, together with a convention for how that method should be named.

@tgnassou, what kind of error do you receive? That might make it easier to think of a solution.

@sliwy
Collaborator Author

sliwy commented Sep 8, 2023

And yes, for partially initialized models it will only work with nn.Sequential, because nn.Sequential's forward can be called even when not all modules have been added to the model yet.

In the case of nn.Module this is problematic, because parts of the model that do not exist yet are used directly in forward. We could think of a try/except clause for the case where some modules are not defined yet, but I don't think it's worth implementing.

@PierreGtch
Collaborator

(Quoting @sliwy's reply to @tgnassou above.)

@sliwy, @tgnassou, once @brunaafl is finished with #520, the next step will be to add swap_final_layer and drop_final_layer methods that will allow easily replacing the final layer with a placeholder identity function. This way, we will also be able to compute the output shape for nn.Module models!
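A hypothetical sketch of that idea (swap_final_layer/drop_final_layer did not exist at this point; the method name, the drop_final_layer helper below, and the final_layer naming convention are all assumptions from this thread): replace the conventionally named last layer with nn.Identity, then read the feature shape off a dummy forward pass.

```python
import torch
from torch import nn


class TinyClassifier(nn.Module):
    """Toy stand-in for an EEG model with a conventionally named final layer."""

    def __init__(self, n_chans=8, n_times=100, n_outputs=2):
        super().__init__()
        self.conv = nn.Conv1d(n_chans, 4, kernel_size=5)
        self.pool = nn.AdaptiveAvgPool1d(10)
        self.final_layer = nn.Linear(4 * 10, n_outputs)

    def forward(self, x):
        feats = self.pool(self.conv(x)).flatten(start_dim=1)
        return self.final_layer(feats)


def drop_final_layer(model: nn.Module) -> nn.Module:
    # Hypothetical helper: relies on the convention that the last
    # layer is always named `final_layer`.
    model.final_layer = nn.Identity()
    return model


model = drop_final_layer(TinyClassifier())
with torch.no_grad():
    feats = model(torch.zeros(1, 8, 100))
print(tuple(feats.shape))  # (1, 40)
```

With the final layer swapped for an identity, the same dummy-forward trick used for nn.Sequential models yields the penultimate feature shape of any nn.Module.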

@sliwy
Collaborator Author

sliwy commented Sep 8, 2023

@PierreGtch That's the convention I had in mind; if the last layer has a specific, agreed-upon name, then it's possible.

Review threads (resolved): braindecode/models/base.py, braindecode/models/util.py
@PierreGtch PierreGtch merged commit 8cbd7b2 into braindecode:master Sep 8, 2023
11 of 12 checks passed
dcwil pushed a commit to dcwil/braindecode that referenced this pull request Sep 11, 2023
* added output_shape, convert_to_regressor, and to_dense_prediction_model to EEGModuleMixin

* added tests for output_shape and to_dense_prediction_model in EEGModuleMixin

* change to use to_dense_prediction_model method instead of function

* reverting convert_to_regressor changes

* update tests to use chs_info instead of ch_names

* fix typing hint to work with older pythons

* added n_times when creating model in test_scoring.py

* change output_shape property to get_output_shape method

* changed deprecation to use sklearn

* fix get_output_shape in example

* added whats new

---------

Co-authored-by: PierreGtch <25532709+PierreGtch@users.noreply.github.com>