prasadpatil99/Wonders-of-World-Image-Classification


MultiClass Image Classification

The aim is to train an artificial neural network to recognize wonders of the world.

Artificial neural networks are loosely inspired by the human brain.
They are not biologically faithful, but they learn correlations in images through multiple weights assigned to each input according to its importance and to the characteristics of the network type.

Data Preprocessing

Data was gathered by downloading Google Images with a Python library

CNN

The model learns using a convolutional neural network (CNN).
A CNN contains multiple layer types, such as convolution layers, max pooling layers, average pooling layers, and flatten layers.

  • Convolution Layer - A convolution layer contains a set of filters whose dimensions are smaller than the input. Each filter is slid across the width and height of the input image, and a dot product is computed at every position. Since a filter is smaller than the input, each neuron learns from one local region; in effect, the filter size defines a neuron's receptive field, so the layer builds feature maps that summarize the input image.
  • Maxpooling Layer - Max pooling also slides a window across the image, but instead of a dot product with a filter it keeps only the maximum value in each window. The resulting neurons therefore hold the most significant, sharpest features of an image.
  • Flattening Layer - Used at the final stage to convert the multi-dimensional feature maps into a one-dimensional vector (a single column) for the dense layers.
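The layers described above can be sketched as a small Keras model. The filter counts, input size, and 18-class softmax head are illustrative assumptions, not the repo's exact architecture:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative CNN; layer sizes are assumptions, not the repo's exact model.
model = keras.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),  # filters slide over the image
    layers.MaxPooling2D((2, 2)),                   # keep the max value per window
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                              # feature maps -> one vector
    layers.Dense(128, activation="relu"),
    layers.Dense(18, activation="softmax"),        # one probability per wonder class
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```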

For executing the CNN in Google Colab

Transfer Learning

Transfer learning is a process where a model already pretrained on one task is repurposed for a second, related task.
It saves training time and can give better performance without needing a lot of data: the pretrained model's weights are already well trained, which cuts both time and computational cost.
Here is a list of the best pretrained models for transfer learning.
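A minimal Keras sketch of the idea, assuming MobileNetV2 as the pretrained base (the choice of base model is an assumption, not fixed by the repo); the ImageNet weights are frozen and only a new classification head is trained:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Pretrained base: MobileNetV2 trained on ImageNet, without its own classifier.
base = keras.applications.MobileNetV2(weights="imagenet",
                                      include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False                       # freeze the pretrained weights

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),         # pool feature maps to one vector
    layers.Dense(18, activation="softmax"),  # new head for the 18 wonder classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Because the base is frozen, only the small head's weights are updated during training, which is what cuts the training time and data requirements.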

Data Preview

| | | |
| --- | --- | --- |
| Venezuela Angel Falls | Taj Mahal | Stonehenge |
| Statue of Liberty | Santorini | Chichén Itzá - Mexico |
| Christ the Redeemer Statue | Giant's Causeway | Pyramids of Giza |
| Niagara Falls | Himalaya | The Blue Grotto |
| Eiffel Tower | Great Wall of China | Antarctica |
| Burj Khalifa | Roman Colosseum | Machu Picchu |

Dataset

Dataset can be obtained from here

Dependencies

$ pip3 install --user --upgrade tensorflow
$ pip install Keras
$ pip install glob2
$ pip install opencv-python
$ pip install scikit-learn


Author

  • Prasad Patil
