caped-crusader16/Classification-of-Citrus-Leaves-using-CNN-classifier

Citrus Leaf Classification with CNN

In this assignment, our objective is to classify the Citrus Leaves dataset using a CNN classifier. We compare the performance of different optimizers and hyperparameters on metrics such as accuracy, precision, and recall.

Introduction

Image classification is among the fundamental tasks handled by CNNs. The goal of classification is to assign a label to an image, which requires the network to comprehend the image's content. In this assignment, the objective is to understand, design, and implement a CNN classifier: the aim is not merely to implement it, but to understand it as well.


Approach

Dataset and preparation

The original dataset contains 759 images of healthy and unhealthy citrus fruits and leaves. However, the owners currently export only 594 images of citrus leaves, with the following four labels: Black Spot, Canker, Greening, and Healthy. The exported images are in PNG format with dimensions 256×256.

ImageDataGenerator was used to generate training, validation, and testing data (60%, 20%, and 20% of the dataset, respectively). This allowed us to randomly augment the training data by zooming in and out (30%), rotating (±180°), shifting height and width (30%), and flipping horizontally and vertically. Finally, both training and validation images were rescaled pixel-wise to intensities in the range [0, 1].
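The split proportions and rescaling can be sketched in a dependency-free way. This is an illustration only: the actual pipeline uses Keras's ImageDataGenerator, and the filenames below are hypothetical placeholders.

```python
import random

# Dependency-free sketch of the 60/20/20 split and pixel rescaling
# described above. The real pipeline uses Keras's ImageDataGenerator;
# the filenames here are hypothetical placeholders.
filenames = [f"leaf_{i:03d}.png" for i in range(594)]  # 594 exported images
random.seed(0)
random.shuffle(filenames)

n_train = int(0.60 * len(filenames))   # 356 training images
n_val = int(0.20 * len(filenames))     # 118 validation images
train = filenames[:n_train]
val = filenames[n_train:n_train + n_val]
test = filenames[n_train + n_val:]     # remaining 120 test images

# Rescale an 8-bit intensity [0, 255] into [0, 1], as applied per pixel.
def rescale(intensity):
    return intensity / 255.0

print(len(train), len(val), len(test))
```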

Model

We built a custom model for this task.

A.) Architecture

Layers:

  • Convolution (16 filters, 3×3 kernel, ReLU)
  • MaxPooling (/2)
  • Convolution (32 filters, 3×3 kernel, ReLU)
  • MaxPooling (/2)
  • Convolution (64 filters, 3×3 kernel, ReLU)
  • MaxPooling (/2)
  • Convolution (64 filters, 3×3 kernel, ReLU)
  • MaxPooling (/2)
  • Convolution (64 filters, 3×3 kernel, ReLU)
  • MaxPooling (/2)
  • Flatten [4092 units]
  • Dense (512, ReLU)
  • Dense (4, Sigmoid)

CNN Architecture
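As a sanity check on the layer list above, the spatial dimensions can be traced through the network. This sketch assumes 'same' padding for the 3×3 convolutions, which is an assumption since the padding mode is not stated.

```python
# Trace feature-map shapes through the architecture above, assuming
# 'same' padding for the 3x3 convolutions (an assumption; with 'valid'
# padding the numbers would differ) and 2x2 max pooling after each.
size, channels = 256, 3          # 256x256 RGB input
for filters in [16, 32, 64, 64, 64]:
    channels = filters           # conv changes the channel count only
    size //= 2                   # max pooling halves each spatial dim
flat_units = size * size * channels
print(size, channels, flat_units)
```

Under the 'same'-padding assumption this gives an 8×8×64 feature map and 4096 flattened units, close to the 4092 listed above; with 'valid' padding the flattened size would instead be 6×6×64 = 2304, so the exact count depends on padding choices.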

Inspiration

The inspiration for our architecture is drawn from AlexNet. Since we have a simple dataset, it was important to control the number of parameters in order to control variance. This particular model was reached after more than a hundred tries and iterative refinements. All attempts at adding regularization, changing activation functions, reordering layers, changing the number of layers, and more resulted in much worse accuracy. This architecture is the sweet spot we reached.

B.) Training

I. Optimizers

We tried Stochastic Gradient Descent (SGD) with momentum, Adam, and RMSprop. SGD performed the worst and RMSprop performed the best.

1.} Stochastic Gradient Descent, with momentum

SGD

2.} Root Mean Square Propagation

RMSPROP

3.} Adaptive Moment Estimation

ADAM
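For reference, the three update rules compared above can be sketched on a toy one-dimensional quadratic. The hyperparameter values below are illustrative defaults only, not the values used in training.

```python
# Dependency-free sketch of the three update rules compared above,
# minimizing the toy quadratic f(w) = (w - 3)^2 with gradient 2(w - 3).
# Hyperparameter values are illustrative, not the ones used in training.
def grad(w):
    return 2.0 * (w - 3.0)

def sgd_momentum(lr=0.1, beta=0.9, steps=200):
    w, v = 0.0, 0.0
    for _ in range(steps):
        v = beta * v + grad(w)   # accumulate a velocity term
        w -= lr * v
    return w

def rmsprop(lr=0.01, rho=0.9, eps=1e-8, steps=1000):
    w, s = 0.0, 0.0
    for _ in range(steps):
        g = grad(w)
        s = rho * s + (1 - rho) * g * g   # running average of squared grads
        w -= lr * g / (s ** 0.5 + eps)    # scale step by RMS of gradients
    return w

def adam(lr=0.01, b1=0.9, b2=0.999, eps=1e-8, steps=1000):
    w, m, v = 0.0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g         # first moment (momentum)
        v = b2 * v + (1 - b2) * g * g     # second moment (RMS scaling)
        m_hat = m / (1 - b1 ** t)         # bias correction
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (v_hat ** 0.5 + eps)
    return w
```

All three converge toward the minimum at w = 3; Adam combines the momentum term of SGD-with-momentum with the per-parameter scaling of RMSprop.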

II. Hyperparameters

  • Number of Epochs
  • Batch Size

We kept the number of epochs at 35, so as to neither underfit nor overfit the model, and chose training and validation batch sizes (32 and 8) whose ratio is close to the ratio of their shares in the dataset (3:1).


Results

Performance metrics

Optimizer   Loss     Accuracy   Precision   Recall
SGD         0.9554   0.4832     0.6321      0.3408
RMSprop     0.5202   0.7849     0.8049      0.7374
Adam        0.4764   0.8156     0.4140      0.9944
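Since precision and recall pull in different directions here, the F1 score (their harmonic mean) is one way to summarize the two columns jointly:

```python
# Combine the precision and recall columns of the table above into
# an F1 score (the harmonic mean of precision and recall).
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

results = {
    "SGD":     (0.6321, 0.3408),
    "RMSprop": (0.8049, 0.7374),
    "Adam":    (0.4140, 0.9944),
}
for name, (p, r) in results.items():
    print(f"{name}: F1 = {f1(p, r):.4f}")
```

F1 weighs precision and recall equally; which trade-off matters most depends on the application.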

Learning curves

Learning curves (loss, accuracy, precision, and recall) for SGD, RMSprop, and Adam.

Discussion of Results

We can see that the maximum accuracy reached by the two best optimizers is about 80%, which is not bad considering the small size of the dataset. From our observations, a sufficiently low learning rate along with a large number of epochs results in the best validation and testing accuracy. Learning curves served as a guide for determining a proper learning rate. In the case of RMSprop, the learning curves are quite 'jumpy', but overall, as the number of epochs increases, the validation loss decreases and the validation accuracy increases, so we know that the model is learning.

A possible remedy was to further decrease the learning rate or to use an optimizer with momentum. However, when momentum was tried, it resulted in relatively poorer loss and accuracy, so the model did not train as well as desired. Decreasing the learning rate, even by a factor of 5, caused the weights to get stuck at a local minimum. Therefore, even with its volatility, RMSprop's testing metrics are satisfactory, making it a good choice; its learning can be considered satisfactory.

Adam is also a very good option. Its accuracy is slightly higher than RMSprop's, and its recall is the highest of the three optimizers, although its precision is lower. Overall, we can say that Adam is the best choice of optimizer in this case.


Key Takeaways

In this work, we have implemented classification of leaf diseases using a custom-made Convolutional Neural Network, training it with different optimizers. Their results have been discussed and compared. As shown in the table, Adam and RMSprop performed very similarly on accuracy, with Adam appearing to be the best choice overall. Our final testing accuracy is ~80%.


Platform

Google Colab

Installation Guide

  • Clone this repository using
$ git clone https://github.com/Engineer1999/CSP520-Computer-Vision.git
$ cd CSP520-Computer-Vision
  • Install the dependencies using
$ pip install -r requirements.txt
  • To run locally, launch Jupyter Notebook using $ jupyter notebook, or upload the .ipynb file to Google Colab.


Contribution
