## What is Transfer Learning?


Transfer learning is the ability of a system to recognize and apply knowledge and skills learned in previous tasks to novel tasks (or new domains).


## Keras Pre-trained Models

Keras is a high-level neural networks API written in Python. It is modular, minimalist, and easy to use, and runs on top of TensorFlow, Theano, or CNTK.

Keras Applications (the model zoo) contains the following pre-trained models:

- Xception
- VGG16
- VGG19
- ResNet50
- InceptionV3

## Why use pre-trained models?

It is relatively rare to have a dataset of sufficient size to train a deep ConvNet from scratch. Instead, it is common to take a ConvNet pre-trained on a very large dataset (e.g. ImageNet, which contains 1.2 million images across 1,000 categories) and use it either as an initialization or as a fixed feature extractor for the task of interest. Training a model from scratch also takes far more time than training only the dense layers of a pre-trained model.
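
The simplest use of a pre-trained network is to run it unchanged and read off its ImageNet predictions, as in the `VGG16 to predict labels.ipynb` notebook. Below is a minimal sketch; the image path points at the sample photo in this repository, and any RGB image resized to 224x224 would work the same way:

```python
# Minimal sketch: use the full pre-trained VGG16 to predict ImageNet labels.
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from keras.preprocessing import image

model = VGG16(weights='imagenet')  # full network, including the 1000-way classifier

# Load and preprocess one image (path assumes the sample photo in this repo).
img = image.load_img('dog-1210559_960_720.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# decode_predictions maps class indices to human-readable ImageNet labels.
print(decode_predictions(model.predict(x), top=3)[0])
```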

## Steps for using pre-trained models

### 1. Feature extraction

- Remove the fully connected (bottleneck) layers from the pre-trained VGG16 model.
- Run the images from the dataset through this truncated network to produce image vectors (bottleneck features).
- Use these vectors to train a second classifier that predicts the labels in the training set.
- At prediction time, an image is run through the truncated network and its vector is fed to the second classifier, as in the sketch below.
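
A minimal sketch of this workflow in Keras; the pooling choice, the tiny file list, and the scikit-learn classifier are illustrative assumptions, not the exact setup of the notebooks:

```python
# Minimal sketch: VGG16 as a fixed feature extractor plus a second classifier.
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image
from sklearn.linear_model import LogisticRegression

# VGG16 without its fully connected (bottleneck) layers; global average
# pooling turns each image into a single 512-dimensional vector.
base_model = VGG16(weights='imagenet', include_top=False, pooling='avg')

def image_vector(path):
    """Run one image through the truncated network to get its feature vector."""
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return base_model.predict(x)[0]

# Hypothetical dataset: in practice train_paths / train_labels come from your data.
train_paths, train_labels = ['cat1.jpg', 'dog1.jpg'], [0, 1]
train_vectors = np.array([image_vector(p) for p in train_paths])

clf = LogisticRegression().fit(train_vectors, train_labels)  # second classifier
print(clf.predict([image_vector('dog1.jpg')]))               # predict via the vector
```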

### 2. Fine-tuning

Here we train the model partially:

- Remove the fully connected (bottleneck) layers from the pre-trained VGG16 model.
- Freeze the weights of all convolutional blocks except the last few convolutional layers.
- Attach our own classifier on top of the convolutional base.
- Train the resulting network with a very low learning rate (see the sketch below).

This is computationally more expensive than feature extraction, but still much cheaper than training the network from scratch, and it yields a more robust model.
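
A minimal sketch of fine-tuning in Keras; the classifier head, the number of classes, and the learning rate are illustrative assumptions:

```python
# Minimal sketch: fine-tune only the last convolutional block of VGG16.
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense
from keras.optimizers import SGD

# Convolutional base without the fully connected (bottleneck) layers.
base_model = VGG16(weights='imagenet', include_top=False,
                   input_shape=(224, 224, 3))

# Freeze every layer except the last convolutional block ('block5_...').
for layer in base_model.layers:
    layer.trainable = layer.name.startswith('block5')

# Attach our own classifier on top of the convolutional base.
x = Flatten()(base_model.output)
x = Dense(256, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)  # assuming a 2-class problem

model = Model(inputs=base_model.input, outputs=predictions)

# Very low learning rate, so the pre-trained weights are only nudged.
model.compile(optimizer=SGD(lr=1e-4, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(train_images, train_labels, epochs=5, batch_size=16)
```

In practice the new classifier head is usually trained first with the whole convolutional base frozen, and only then is the last block unfrozen for fine-tuning, so that large random gradients from the fresh head do not wreck the pre-trained weights.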

## Results

### Input

![Input image](dog-1210559_960_720.jpg)

### Output

![Bottleneck layer visualization](bottleneck_last_layer_viz.png)