Question about out_a and out_b #6
Comments
The operation of the two separate convolutions in the code is identical to a single convolution followed by a split.
Actually, these two implementations have different computational complexity. By the way, which implementation are the computational complexity and accuracy reported in the paper based on? Thanks.
This is a misunderstanding. In Figure 2, the 1x1 convolutions before and after the 3x3 convolution in the bottleneck structure are omitted. Self-calibrated convolution replaces only the 3x3 convolution. Thus the implementation is identical to the paper, and the computational complexities are roughly the same. The computational complexity and accuracy reported in the paper were obtained with exactly the code we released here.
Okay, I see. You mean this implementation splits the first 1x1 convolution of the original bottleneck into two 1x1 convolutions, so two BN+ReLU layers are required. Thanks for your reply; it's clear to me now.
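The parameter-count equivalence described above can be checked with a short sketch: one 1x1 convolution whose output is split in half carries exactly as many weights as two parallel 1x1 convolutions with half the output channels each. The channel widths below are illustrative assumptions, not values from the paper.

```python
def conv_params(in_ch, out_ch, k=1, bias=False):
    """Weight (and optional bias) count of a single 2D convolution."""
    return out_ch * in_ch * k * k + (out_ch if bias else 0)

in_ch, mid_ch = 256, 128  # hypothetical bottleneck widths

# Original bottleneck: one 1x1 conv producing mid_ch channels, then split.
single = conv_params(in_ch, mid_ch)

# Released code: two parallel 1x1 convs, each producing mid_ch // 2 channels.
split = conv_params(in_ch, mid_ch // 2) + conv_params(in_ch, mid_ch // 2)

assert single == split  # identical parameter (and FLOP) budget
print(single, split)
```

The only practical difference, as noted above, is that the two-branch form needs its own BN+ReLU per branch.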
In Figure 2 of the paper, it seems you directly get the two new features ('X1', 'X2') by splitting 'InputX' into two portions without extra parameters. But in the code, you get them with two convolutions. SCNet/scnet.py, Line 103 in c0b5bd6. Are there any differences between the two approaches?
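The two readings of the figure can be contrasted in a small sketch, using NumPy in place of PyTorch so it stays self-contained. The shapes and random weights are illustrative assumptions; the 1x1 convolutions here merely stand in for the two convolutions in scnet.py.

```python
import numpy as np

x = np.random.randn(1, 4, 8, 8)  # (batch, channels, H, W)

# Reading 1 (as the figure appears): a plain channel split, no parameters.
x1, x2 = np.split(x, 2, axis=1)

# Reading 2 (as in the code): each branch applies its own 1x1 convolution,
# implemented here as a per-pixel channel projection via einsum.
w_a = np.random.randn(2, 4)  # (out_channels, in_channels)
w_b = np.random.randn(2, 4)
out_a = np.einsum('oc,bchw->bohw', w_a, x)
out_b = np.einsum('oc,bchw->bohw', w_b, x)

# Both readings produce tensors of the same shape, but only the second
# has learnable weights (the omitted bottleneck 1x1 convs the authors
# mention in their reply).
assert x1.shape == x2.shape == out_a.shape == out_b.shape == (1, 2, 8, 8)
```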