EfficientNet-Rescon

Implementation of EfficientNet: Rethinking Model Scaling for CNNs

Developed by Team Cygnus in SRM MIC Rescon 1.0

Team Members:

Click here for the original paper: https://arxiv.org/abs/1905.11946

Click here for the presentation

What is EfficientNet?

Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget and then scaled up for better accuracy if more resources become available. The paper systematically studies model scaling and identifies that carefully balancing network depth, width, and resolution leads to better performance. Based on this observation, it proposes a new scaling method that uniformly scales all three dimensions with a simple yet highly effective compound coefficient, and demonstrates the effectiveness of this method by scaling up MobileNets and ResNet. Applying the same scaling to a new baseline network yields a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, EfficientNet-B6-Wide achieves state-of-the-art 91.12% top-1 accuracy on ImageNet (480M parameters), while being 8.4x smaller and 6.1x faster at inference than the best previous ConvNet.

Inference

The empirical study behind EfficientNet shows that it is critical to balance all dimensions of network width, depth, and resolution, and that, surprisingly, such balance can be achieved by simply scaling each of them with a constant ratio.
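As a rough illustration, here is a minimal Python sketch of that idea, using the per-dimension ratios reported in the paper (α = 1.2 for depth, β = 1.1 for width, γ = 1.15 for resolution, chosen so that α·β²·γ² ≈ 2). The helper name and the printed table are illustrative only and are not part of this repository's source.

```python
# Minimal sketch of compound scaling (illustrative, not this repo's code).
# Depth, width and resolution are scaled together by a single coefficient phi:
#   depth      *= ALPHA ** phi
#   width      *= BETA  ** phi
#   resolution *= GAMMA ** phi
# ALPHA * BETA**2 * GAMMA**2 is approximately 2, so FLOPs grow roughly by 2**phi.

ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # ratios reported in the paper


def compound_scale(phi: float):
    """Return (depth, width, resolution) multipliers for a given compound coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi


if __name__ == "__main__":
    for phi in range(4):
        d, w, r = compound_scale(phi)
        flops = (ALPHA * BETA ** 2 * GAMMA ** 2) ** phi
        print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, "
              f"resolution x{r:.2f}, FLOPs ~x{flops:.2f}")
```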

How does compound scaling work, and how does it differ from older paradigms?

(Figure: compound scaling compared with scaling only a single dimension of width, depth, or resolution.)


  • Scaling up any dimension of network width, depth, or resolution improves accuracy, but the accuracy gain diminishes for bigger models.
  • In order to pursue better accuracy and efficiency, it is critical to balance all dimensions of network width, depth, and resolution during ConvNet scaling. Intuitively, compound scaling makes sense because if the input image is bigger, the network needs more layers to increase the receptive field and more channels to capture finer-grained patterns in the bigger image (a small sketch of this scaling appears after this list).
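To make that concrete, the sketch below shows how a single baseline stage could be grown by the width and depth multipliers. The round-to-a-multiple-of-8 rule mirrors what reference EfficientNet implementations commonly do, but the function names, the example stage, and the multiplier pairs (roughly the B0/B2/B3 settings) are illustrative assumptions, not this repository's code.

```python
import math

# Illustrative helpers showing how one baseline stage grows under compound scaling.


def round_filters(filters: int, width_mult: float, divisor: int = 8) -> int:
    """Scale a channel count by width_mult and round to a multiple of `divisor`."""
    filters *= width_mult
    new_filters = max(divisor, int(filters + divisor / 2) // divisor * divisor)
    if new_filters < 0.9 * filters:  # avoid rounding down by more than 10%
        new_filters += divisor
    return int(new_filters)


def round_repeats(repeats: int, depth_mult: float) -> int:
    """Scale the number of layers in a stage by depth_mult, rounding up."""
    return int(math.ceil(depth_mult * repeats))


# One EfficientNet-B0-like stage: (output channels, number of MBConv layers).
baseline_channels, baseline_layers = 40, 2

for width_mult, depth_mult in [(1.0, 1.0), (1.1, 1.2), (1.2, 1.4)]:
    channels = round_filters(baseline_channels, width_mult)
    layers = round_repeats(baseline_layers, depth_mult)
    print(f"width x{width_mult}, depth x{depth_mult} -> {channels} channels, {layers} layers")
```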

How EfficientNet performs compared to earlier transfer-learning models

(Figure: ImageNet accuracy versus model size for EfficientNet and earlier architectures.)

Earlier, GPipe provided state-of-the-art results on ImageNet, but it is also among the most computationally expensive models. Currently, EfficientNets are by far the best, with EfficientNet-B6-Wide and EfficientNet-L2 reaching top accuracies of 91.12% and 91.02% respectively.

Acknowledgements

To run it locally

  • Fork the project

  • Open Terminal in your desired folder

git clone https://github.com/sd2001/EfficientNet.git
cd EfficientNet
python3 -m venv env
source env/bin/activate
pip install -r requirements.txt
cd src
python architecture.py

To check the Notebooks

  • In the Efficient folder, run:
jupyter notebook
