TUM #2

Closed
Sun-Fan opened this issue Nov 18, 2018 · 4 comments

Comments


Sun-Fan commented Nov 18, 2018

Hello, thanks for your paper. You used 8 TUMs in it; I want to ask whether this causes low FPS. After all, there are many convolution layers. Thanks!


muye5 commented Nov 19, 2018

Both TUM and SFAM are a little complex; it isn't obvious how this architecture can be so fast. Both the hourglass-style TUM and the fully connected attention are not speed-friendly. Waiting for the code.


qijiezhao (Collaborator) commented Nov 21, 2018

TUM is a Thinned U-shape Module; it is not as slow as you think. A single TUM even has fewer parameters than one 1024x3x3x1024 conv layer.
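To make that comparison concrete, here is a back-of-envelope count in Python (the 256-channel width inside a TUM is from the paper; the figure of roughly a dozen 3x3 convs per TUM is an assumption for illustration, not an exact count from the code):

```python
# Back-of-envelope weight counts (biases ignored). The 256-channel width
# comes from the M2Det paper; "~12 conv layers per TUM" is a rough estimate.
def conv_params(c_in, k, c_out):
    return c_in * k * k * c_out

big_conv = conv_params(1024, 3, 1024)     # one 1024x3x3x1024 layer: ~9.4M
tum_est = 12 * conv_params(256, 3, 256)   # rough TUM total: ~7.1M

print(f"1024x3x3x1024 conv: {big_conv / 1e6:.1f}M params")
print(f"TUM (approx.):      {tum_est / 1e6:.1f}M params")
```

So even 8 TUMs stay in the same parameter ballpark as a handful of wide backbone convs, which is why the module is "thinned".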
SFAM is also not that complex: first concatenation, then SE attention applied to the multi-scale features. There are only 6 SE attention blocks; for comparison, SE-ResNet101 adds far more SE blocks on top of ResNet101.
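For reference, a minimal PyTorch-style sketch of one SE attention block as SFAM would apply it per scale (the class name, the reduction ratio r=16, and the 1024-channel width after concatenation are assumptions for illustration, not taken from the released code):

```python
import torch.nn as nn

# Minimal sketch of one squeeze-and-excitation (SE) block; SFAM applies
# one such channel-wise attention per pyramid scale. r=16 is an assumed
# reduction ratio, not confirmed from the M2Det code.
class SEBlock(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global average pool
        self.fc = nn.Sequential(                 # excitation: two FC layers
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # channel-wise reweighting

# Only 6 such blocks in total, one per scale (1024 channels assumed after
# concatenating the TUM outputs):
se_blocks = nn.ModuleList(SEBlock(1024) for _ in range(6))
```

The cost is a global pool plus two small FC layers per scale, which is negligible next to the convolutions.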

@leo-XUKANG

I have tried it in darknet, and as the network gets deeper, it gets slower.

@HoracceFeng

@leo-XUKANG very interesting attempt. I am also curious why M2Det achieves such fast speed. Would you share your darknet implementation of M2Det?
