Question: Position of BN in BAM. #49
I think there is possibly an inconsistency in the location of the application of the Batch Normalization between the paper and the code.

1st instance of possible inconsistency:
The paper mentions on Page 4, Section 3, subsection Channel attention branch that the channel attention is computed as `Mc(F) = BN(MLP(AvgPool(F)))`. So it is understood that the batch norm is applied at the end of the MLP, i.e. after the final layer. However, the implementation's final step in the `forward()` function is:

attention-module/MODELS/bam.py Line 25 in 459efad

where `self.gate_c`'s final layer is defined as:

attention-module/MODELS/bam.py Line 22 in 459efad

so no BN follows the last Linear layer.

2nd instance of possible inconsistency:
The paper likewise describes the spatial attention branch as ending with a batch norm. So it is understood that the batch norm is applied at the end of the spatial branch, after its final convolution. However, the spatial gate's `forward()` is:

attention-module/MODELS/bam.py Line 41 in 459efad

where the final layer of `self.gate_s` is defined as:

attention-module/MODELS/bam.py Line 39 in 459efad

again with no trailing BN. One might suspect that the final BN layers could instead be added in:

attention-module/MODELS/bam.py Line 48 in 459efad

but from that line it becomes clear that the BN layer is not applied before the combination of the two attention branches either.
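For illustration, here is a minimal sketch (not the repository's actual code) of what the channel attention branch would look like if the final BN from the paper's formulation, `Mc(F) = BN(MLP(AvgPool(F)))`, were appended after the last Linear layer. The class name, reduction ratio, layer sizes, and the use of `adaptive_avg_pool2d` are illustrative assumptions; the analogous change would append a `BatchNorm2d` after the spatial branch's final 1x1 convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelGateWithFinalBN(nn.Module):
    """Sketch of a channel gate that ends with BN, as the paper's formula suggests."""
    def __init__(self, gate_channel, reduction_ratio=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(gate_channel, gate_channel // reduction_ratio),
            nn.BatchNorm1d(gate_channel // reduction_ratio),
            nn.ReLU(),
            nn.Linear(gate_channel // reduction_ratio, gate_channel),
        )
        # In bam.py the sequential stops at the final Linear (the 1st instance above);
        # this extra BN is what the paper's Mc(F) = BN(MLP(AvgPool(F))) would imply.
        self.final_bn = nn.BatchNorm1d(gate_channel)

    def forward(self, x):
        # Global average pool to (B, C), run the MLP, apply the final BN,
        # then broadcast the per-channel attention back to the input shape.
        pooled = F.adaptive_avg_pool2d(x, 1).flatten(1)
        att = self.final_bn(self.mlp(pooled))
        return att.unsqueeze(2).unsqueeze(3).expand_as(x)

if __name__ == "__main__":
    t = torch.randn(2, 64, 8, 8)
    print(ChannelGateWithFinalBN(64)(t).shape)  # torch.Size([2, 64, 8, 8])
```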