
[WIP] EEG-Inception #390

Merged (31 commits) on Sep 15, 2022

Conversation

@bruAristimunha (Collaborator)

I am proposing to implement EEG-Inception. This model was used by the winners of last year's BEETL Competition.

Is this suitable for the library?

@codecov

codecov bot commented Jun 9, 2022

Codecov Report

Merging #390 (6b2720a) into master (884a78b) will increase coverage by 0.21%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master     #390      +/-   ##
==========================================
+ Coverage   82.81%   83.03%   +0.21%     
==========================================
  Files          54       55       +1     
  Lines        3789     3838      +49     
==========================================
+ Hits         3138     3187      +49     
  Misses        651      651              

@robintibor (Contributor)

In general, more models are very welcome, thank you very much!! Can you add some tests with different input sizes, just checking that the forward pass works and that the output shape is as expected?
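A minimal sketch of the kind of shape test being requested here. `DummyEEGModel` and `check_output_shape` are placeholder names for illustration, not the actual `EEGInception` class or the tests that were merged:

```python
import torch
from torch import nn

class DummyEEGModel(nn.Module):
    """Stand-in for EEGInception, used only to illustrate the test."""
    def __init__(self, n_channels, n_times, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_channels * n_times, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def check_output_shape(model_cls, n_channels, n_times, n_classes, batch_size=4):
    """Run a forward pass and check the output shape is (batch, n_classes)."""
    model = model_cls(n_channels, n_times, n_classes)
    x = torch.rand(batch_size, n_channels, n_times)
    with torch.no_grad():
        out = model(x)
    assert out.shape == (batch_size, n_classes), out.shape
    return tuple(out.shape)

# Try a few different input sizes, as requested in the review.
for n_times in (500, 1000, 3000):
    check_output_shape(DummyEEGModel, n_channels=8, n_times=n_times, n_classes=2)
```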

@agramfort (Collaborator)

Great stuff, as @robintibor would say :)

To get this merged we would need a test (see how it's done for the other models) and a what's new entry.

By the way, I checked the license of https://github.com/Grifcc/EEG/tree/90e412a407c5242dfc953d5ffb490bdb32faf022 and it's Apache 2.0, so it's compatible with this repo (BSD).

@bruAristimunha (Collaborator, Author)

Sure, I will do the tests. Glad I'm helping =)

@robintibor (Contributor)

Do you need any support here? :) Is this still ongoing? :)

@bruAristimunha (Collaborator, Author)

Thanks for reminding me, @robintibor. I will try to finish it this week.

@bruAristimunha (Collaborator, Author)

Hello @robintibor!

I think I'm going to need some help X=
It is a little more difficult than expected to adjust the model's dimensions.

@bruAristimunha (Collaborator, Author)

Hello @sliwy, how are you?

I also need a code review. I'm doing something wrong in the code, and it's not working for dimensions other than the default values.

@sliwy (Collaborator)

sliwy commented Aug 31, 2022

Hi @bruAristimunha, I may be able to help. I need more info from you: when does it fail, what is the error, and how can I reproduce it?

I think we may be able to simplify it a bit by creating blocks and using nn.Sequential in the EEGInception implementation as well. I'm going to put some comments in the review.
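A sketch of the kind of refactor being suggested: each inception branch as an `nn.Sequential`, collected in an `nn.ModuleList` and concatenated along the channel dimension. Names, kernel lengths, and filter counts below are illustrative, not the merged implementation:

```python
import torch
from torch import nn

def inception_branch(in_ch, out_ch, kernel_len, drop_prob=0.5):
    """One temporal-conv branch of an inception block (sketch)."""
    return nn.Sequential(
        # "same" temporal padding keeps the time dimension unchanged
        nn.Conv2d(in_ch, out_ch, kernel_size=(kernel_len, 1),
                  padding=(kernel_len // 2, 0)),
        nn.BatchNorm2d(out_ch),
        nn.ELU(),
        nn.Dropout(drop_prob),
    )

class InceptionBlock(nn.Module):
    """Run several branches in parallel and concatenate along channels."""
    def __init__(self, branches):
        super().__init__()
        self.branches = nn.ModuleList(branches)

    def forward(self, x):
        return torch.cat([branch(x) for branch in self.branches], dim=1)

# Three branches with odd kernel lengths so "same" padding is exact.
block = InceptionBlock([inception_branch(1, 8, k) for k in (65, 33, 17)])
out = block(torch.rand(2, 1, 128, 8))  # (batch, 1, time, channels)
# 3 branches x 8 filters -> 24 output channels, time dim preserved
```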

@sliwy (Collaborator) left a review comment:

@bruAristimunha really great contribution with the Inception model, needed in the library for sure!

from .functions import squeeze_final_output


class CustomPad(nn.Module):

I like that, we can use it as well in this PR #400
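The diff excerpt above only shows the class header. A plausible implementation of `CustomPad` (an assumption, not necessarily the merged code) is a thin wrapper around `F.pad`, so that asymmetric padding can be used as a layer inside `nn.Sequential`:

```python
import torch
from torch import nn
import torch.nn.functional as F

class CustomPad(nn.Module):
    """Wrap F.pad so padding can be used as a layer in nn.Sequential.

    `padding` follows F.pad's convention for 4D input:
    (left, right, top, bottom), i.e. last dimension first.
    Plausible reconstruction, not necessarily the merged code.
    """
    def __init__(self, padding):
        super().__init__()
        self.padding = padding

    def forward(self, x):
        return F.pad(x, self.padding)

# Pad the time axis asymmetrically (2 before, 1 after), channels untouched.
pad = CustomPad((0, 0, 2, 1))
y = pad(torch.zeros(1, 4, 10, 8))  # time dim grows from 10 to 13
```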

(Several review threads on braindecode/models/eeginception.py, all resolved.)
self.sfreq = sfreq
self.n_filters = n_filters

scales_samples = [int(s * sfreq / input_window_samples) for s in scales_time]

I am not sure this should depend on input_window_samples. If we increase the size of the input window, we may still want to keep the same scales, corresponding to the selected kernel lengths in milliseconds.

Maybe we can define the inception kernel lengths in seconds. To me that is more natural.
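The suggestion above in code form: if the scales are defined in seconds, the conversion to samples depends only on `sfreq`, not on `input_window_samples`. The values here are illustrative:

```python
# Kernel scales defined in seconds, converted to sample counts.
# The conversion depends only on the sampling frequency, so the kernels
# keep the same temporal extent regardless of the input window length.
sfreq = 128                        # Hz (illustrative)
scales_time = (0.5, 0.25, 0.125)   # seconds (illustrative)

scales_samples = [int(s * sfreq) for s in scales_time]
# -> [64, 32, 16] samples, independent of input_window_samples
```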

@bruAristimunha (Collaborator, Author)

Hi @sliwy,

Strangely, the change to Sequential increases the number of parameters. It also still throws a weird error when the input values are changed. I re-added the tests that were failing. I have no idea what's wrong or how to fix it. I think I'll wait for the conclusion of the other PR and adapt this based on that code; the networks are similar.

@sliwy (Collaborator)

sliwy commented Sep 6, 2022

@bruAristimunha, to debug the parameter-count changes per layer more easily, you can use https://github.com/TylerYep/torchinfo
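Besides torchinfo's `summary(model)`, a plain-PyTorch sketch for comparing per-module parameter counts between two model variants, which is the debugging task at hand (the layers below are illustrative, not the EEGInception code):

```python
from torch import nn

def param_counts(model):
    """Per-module trainable parameter counts, like a torchinfo summary column."""
    return {
        name: sum(p.numel() for p in mod.parameters(recurse=False)
                  if p.requires_grad)
        for name, mod in model.named_modules()
    }

# Two variants of the same stack, differing only in the conv bias.
a = nn.Sequential(nn.Conv2d(1, 8, (64, 1), bias=False), nn.BatchNorm2d(8))
b = nn.Sequential(nn.Conv2d(1, 8, (64, 1), bias=True), nn.BatchNorm2d(8))

counts_a, counts_b = param_counts(a), param_counts(b)
for name in counts_a:
    if counts_a[name] != counts_b[name]:
        # The bias term adds 8 parameters to the conv layer.
        print(name, counts_a[name], counts_b[name])
```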

@bruAristimunha (Collaborator, Author)

bruAristimunha commented Sep 6, 2022

I refactored it based on the other model; here is the resulting torchinfo summary. I will go into more depth on this issue, but it will possibly take me a while.

=================================================================
Layer (type:depth-idx)                   Param #
=================================================================
EEGInception                             --
├─ELU: 1-1                               --
├─Ensure4d: 1-2                          --
├─Expression: 1-3                        --
├─_InceptionBlock: 1-4                   --
│    └─ModuleList: 2-1                   --
│    │    └─Sequential: 3-1              6,016
│    │    └─Sequential: 3-2              2,984
│    │    └─Sequential: 3-3              1,492
├─AvgPool2d: 1-5                         --
├─AvgPool2d: 1-6                         --
├─Sequential: 1-7                        --
│    └─CustomPad: 2-2                    --
│    └─Conv2d: 2-3                       2,304
│    └─BatchNorm2d: 2-4                  24
│    └─ELU: 2-5                          --
│    └─AvgPool2d: 2-6                    --
│    └─Dropout: 2-7                      --
│    └─CustomPad: 2-8                    --
│    └─Conv2d: 2-9                       288
│    └─BatchNorm2d: 2-10                 12
│    └─ELU: 2-11                         --
│    └─AvgPool2d: 2-12                   --
│    └─Dropout: 2-13                     --
├─Sequential: 1-8                        --
│    └─Flatten: 2-14                     --
│    └─Linear: 2-15                      3,474
│    └─Softmax: 2-16                     --
=================================================================
Total params: 16,594
Trainable params: 16,594
Non-trainable params: 0
=================================================================

@bruAristimunha (Collaborator, Author)

Thank you so much! I hadn't noticed the wrong formatting; I think I'll stop for today ;p

In fact, I think I sent a duplicate. But good idea, I'll do it (module vs. sequential).

@bruAristimunha (Collaborator, Author)

Hello @robintibor and @sliwy,

I've talked to @cedricrommel, and he's going to take a closer look at this code next week.

@cedricrommel (Collaborator)

cedricrommel commented Sep 15, 2022

> Hello @robintibor and @sliwy,
>
> I've talked to @cedricrommel, and he's going to take a closer look at this code next week.

I found a few problems with the layer dimensions and fixed them. Both tests are now passing. For the second test, I found in the authors' official code (provided by @bruAristimunha) that they set all biases to False except in the first inception block. After fixing that and the window size in the test, we now get the number of parameters announced in the paper.

@cedricrommel cedricrommel marked this pull request as ready for review September 15, 2022 15:33
@cedricrommel (Collaborator)

In case it can help:
[Screenshot attached: 2022-09-15, 17:36]

@agramfort (Collaborator) left a comment:

@bruAristimunha can you add this model to the doc before merging?

🙏

@cedricrommel (Collaborator)

Apparently math.prod was only introduced in Python 3.8, so it breaks on 3.7 😓 Can you fix that, @bruAristimunha?
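`math.prod` was added in Python 3.8, so it is unavailable on 3.7. A 3.7-compatible drop-in (a sketch, not necessarily the fix that was merged) uses `functools.reduce`:

```python
# math.prod exists only on Python >= 3.8; this equivalent works on 3.7 too.
from functools import reduce
from operator import mul

def prod(iterable, start=1):
    """Product of an iterable, like math.prod (empty iterable -> start)."""
    return reduce(mul, iterable, start)

# e.g. flattening a shape tuple into a feature count:
n_features = prod((8, 16, 4))  # 512
```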

@bruAristimunha (Collaborator, Author)

Done @agramfort and @cedricrommel,

Many thanks, @sliwy and @robintibor, for the review, and thank you, @cedricrommel, for the hard work on this model! I spent a lot of time trying to find these little details, and I am pleased that you, as an expert professional, managed to find them.

Happy to contribute this model, which won last year's competition, to the library. I am open to further revisions if you feel they are necessary, @agramfort, @robintibor and @sliwy.

@agramfort agramfort merged commit 9cc8c68 into braindecode:master Sep 15, 2022