
Network Models


Simple Convolutional Network

While we were working with a small dataset, a simple convolutional network was used for classification. This network was trained on a few popular classes from a specifically targeted dataset. The network was 4 layers deep and was trained for 20 epochs. The dropout value was set to 0.5.

Train/validation accuracy was around 0.85, and the results looked quite promising.
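
The setup can be sketched roughly in Keras as below. This is a minimal sketch, not the project's actual code: the layer widths, kernel sizes, input shape, and number of classes are illustrative assumptions; only the depth (four convolutional layers), the dropout of 0.5, and the 20 epochs come from the description above.

```python
# Minimal sketch of the simple convolutional network described above,
# assuming a Keras Sequential model; filter counts, kernel sizes, and
# input shape are illustrative placeholders, not the project's values.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

num_classes = 10  # hypothetical number of target classes

model = Sequential([
    # Four convolutional layers, matching the "4 layers deep" description
    Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),  # dropout value of 0.5, as stated above
    Dense(num_classes, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Trained for 20 epochs, as noted above; x_train/y_train are placeholders.
# model.fit(x_train, y_train, epochs=20, validation_split=0.2)
```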



Residual Learning for Image Recognition

Deep convolutional neural networks have led to a series of breakthroughs in image classification, and many other visual recognition tasks have also benefited greatly from very deep models. So, over the years, the trend has been to go deeper, both to solve more complex tasks and to improve classification/recognition accuracy. But as networks get deeper, training becomes more difficult, and accuracy first saturates and then degrades. Residual learning tries to solve both of these problems. ResNet50 is a 50-layer-deep residual network.
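
The core idea is that each block learns a residual function F(x) and adds the block's input back through a shortcut connection, so the output is F(x) + x and identity mappings remain easy to preserve even in very deep stacks. Below is a minimal sketch of a basic residual block, assuming the Keras functional API; the filter count, kernel size, and input shape are illustrative assumptions, not values from this project.

```python
# Minimal sketch of a basic residual block (Keras functional API).
# The key idea is the shortcut connection that adds the block's input
# to its output, so the block learns a residual F(x) rather than the
# full mapping.
from keras.layers import Input, Conv2D, BatchNormalization, Activation, add
from keras.models import Model

def basic_residual_block(x, filters=64):
    shortcut = x
    y = Conv2D(filters, (3, 3), padding='same')(x)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv2D(filters, (3, 3), padding='same')(y)
    y = BatchNormalization()(y)
    y = add([y, shortcut])  # identity shortcut: output = F(x) + x
    return Activation('relu')(y)

# Example: apply one block to a dummy input tensor.
inputs = Input(shape=(32, 32, 64))
outputs = basic_residual_block(inputs)
model = Model(inputs, outputs)
```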

The network builder takes the following parameters (a usage sketch follows the list):

  • input_shape: The input shape in the form (channels, rows, cols).
  • num_outputs: The number of outputs at the final softmax layer.
  • block_fn: The block function to use, either 'basic_block' or 'bottleneck'. The original paper used basic blocks for networks shallower than 50 layers. Both bottleneck and basic residual blocks are supported; to switch between them, simply provide the block function here.
  • repetitions: The number of repetitions of the various block units. At each block unit, the number of filters is doubled and the input size is halved.
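
A hedged usage sketch is below, assuming a keras-resnet style ResnetBuilder whose parameters match the list above; the module name, class name, and the concrete input shape and output count are assumptions, not confirmed by this page.

```python
# Usage sketch, assuming a keras-resnet style builder with the
# parameters documented above (names here are assumptions).
from resnet import ResnetBuilder, bottleneck

# ResNet50 uses bottleneck blocks repeated [3, 4, 6, 3] times;
# input_shape is (channels, rows, cols) and num_outputs is the
# size of the final softmax layer.
model = ResnetBuilder.build(input_shape=(3, 224, 224),
                            num_outputs=1000,
                            block_fn=bottleneck,
                            repetitions=[3, 4, 6, 3])

model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Passing 'basic_block' instead of 'bottleneck' as block_fn would give the shallower-network variant described above; everything else stays the same.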