Hi, thank you for your work and for open-sourcing your code!
Since you don't list email addresses in the paper, I'm using the GitHub issue tracker to ask a question about it:
Have you compared your work to MnasNet (https://arxiv.org/abs/1807.11626)?
MnasNet is stronger than MobileNetV2 across different FLOP settings and achieves 75% top-1 ImageNet accuracy with a compute envelope comparable to the one you used for Figure 3c. Is there a reason why you excluded MnasNet from your comparisons?
Thanks,
Christoph
MnasNet is more of an architecture-space search than a light-weight model design, and is complementary to our work.
MnasNet uses MobileNet blocks, and we have shown that EESP modules learn representations better than MobileNet modules do. If a similar search were done using the EESP module, I believe it would give better performance.
P.S.: Searching over 8K design choices, MnasNet delivers slightly better performance than our work (73.0 top-1 with 3.6M params and 270M FLOPs), while our unit delivers 72.1 under a similar budget.
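For readers comparing these budgets: MnasNet searches over MobileNetV2-style inverted-residual blocks (1x1 expand, depthwise spatial conv, 1x1 project). A minimal sketch of how the parameter count of one such block breaks down is below; the channel sizes and expansion factor are hypothetical examples, not numbers taken from either paper, and biases/batch-norm parameters are omitted for simplicity.

```python
# Sketch: parameter count of a MobileNetV2-style inverted-residual block
# (the building block MnasNet searches over). Channel sizes here are
# illustrative assumptions, not values from either paper.

def inverted_residual_params(c_in, c_out, expansion=6, kernel=3):
    """1x1 expand -> kxk depthwise -> 1x1 project (biases/BN omitted)."""
    hidden = c_in * expansion
    expand = c_in * hidden                # 1x1 pointwise expansion
    depthwise = kernel * kernel * hidden  # per-channel spatial conv
    project = hidden * c_out              # 1x1 pointwise projection
    return expand + depthwise + project

# Hypothetical example: a 32 -> 32 channel block with expansion factor 6
print(inverted_residual_params(32, 32))  # 6144 + 1728 + 6144 = 14016
```

The dominant cost sits in the two pointwise convolutions, which is why both block families (inverted residuals and EESP, which replaces them with grouped pointwise plus parallel dilated depthwise convolutions) focus their savings there.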