This project was created in collaboration with @lenaromanenko during the @spicedacademy boot camp.
The project includes two sub-projects:
- Implementing a Feed-Forward-Network from scratch
- Writing an Image Classifier with pretrained networks: MobileNetV2, ResNet50, VGG16
This introductory sub-project guides you through the steps of writing your own feed-forward network to build a fundamental understanding of the core principles behind deep learning models: https://github.com/lenaromanenko/deep_learning/blob/main/building_neural_network_from_scratch/Feed-Forward-Network.ipynb
The goal of this project is to compare different pre-trained networks and to build an image classifier using the best-performing model. The program image_classifier.py accepts different pre-trained networks to find anemonefish (Nemo) in a picture of an aquarium. The tested pre-trained networks are:
- MobileNetV2
- ResNet50
- VGG16
In a direct comparison, VGG16 provides the best results.
The predictions of VGG16 can be further improved by tweaking the image and the underlying NumPy arrays before analyzing the picture.
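A minimal sketch of how such a comparison might look with tf.keras.applications (shown here with MobileNetV2 to keep the weight download small; ResNet50 and VGG16 plug in the same way, each with its own preprocess_input). The random array below is only a stand-in for a real 224x224 crop of the aquarium photo:

```python
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

# Load the ImageNet-pretrained network. For the comparison, swap in
# ResNet50 or VGG16 from tensorflow.keras.applications; each module
# ships its own preprocess_input and decode_predictions.
model = MobileNetV2(weights="imagenet")

# A random 224x224 stand-in for one frame cropped from the aquarium photo.
frame = np.random.uniform(0, 255, (1, 224, 224, 3))
preds = model.predict(preprocess_input(frame))

# decode_predictions maps the 1000 ImageNet scores to readable labels.
top5 = decode_predictions(preds, top=5)[0]  # [(class_id, name, score), ...]
for _, name, score in top5:
    print(f"{name}: {score:.3f}")
```

Running the same frames through each network and comparing the scores for the anemonefish class is one way to rank the three models.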
The model works on 224 by 224 NumPy arrays, which correspond to frames of 224x224 pixels. Because the original image is much larger, many fish are too big to fit inside a single 224x224 frame. To solve this problem, we can scale the image down to a smaller width.
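A sketch of the downscaling step with Pillow (the helper name and the target width are our own, not taken from image_classifier.py):

```python
import numpy as np
from PIL import Image

def shrink_to_width(image, target_width=672):
    """Resize a PIL image to target_width, preserving the aspect ratio,
    so that a whole fish fits inside a single 224x224 frame."""
    width, height = image.size
    target_height = round(height * target_width / width)
    return image.resize((target_width, target_height))

# Dummy 3000x1500 stand-in for the aquarium photo.
photo = Image.fromarray(np.zeros((1500, 3000, 3), dtype=np.uint8))
small = shrink_to_width(photo, target_width=672)
print(small.size)  # (672, 336)
```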
To analyze more objects in the picture, we can slide the 224x224 frame across the image in smaller steps. This improves the predictions, but it also increases the time needed for them.
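The trade-off between step size and prediction time can be sketched as a sliding window over the image array; the function below is a hypothetical illustration, not code from the repository:

```python
import numpy as np

def sliding_windows(image, size=224, stride=112):
    """Yield (x, y, crop) for every size x size window in the image.
    A smaller stride produces more crops to classify, which improves
    coverage but multiplies the prediction time."""
    height, width = image.shape[:2]
    for y in range(0, height - size + 1, stride):
        for x in range(0, width - size + 1, stride):
            yield x, y, image[y:y + size, x:x + size]

image = np.zeros((448, 672, 3), dtype=np.uint8)  # dummy aquarium photo
crops = list(sliding_windows(image, stride=112))
print(len(crops))  # 15 frames at stride 112, versus 6 at stride 224
```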
Fish close to the borders of the picture fall into fewer frames than fish near the center. To solve this problem, we can add a border around the picture.
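A minimal sketch of the border step using np.pad, assuming a padding of half the frame size so that edge pixels can sit at the center of a window (the helper and its defaults are our own illustration):

```python
import numpy as np

def add_border(image, pad=112, value=0):
    """Pad the height and width of an (H, W, 3) image with a constant
    border, leaving the colour channels untouched, so fish near the
    edges appear in as many 224x224 windows as fish near the center."""
    return np.pad(image, ((pad, pad), (pad, pad), (0, 0)),
                  mode="constant", constant_values=value)

image = np.zeros((448, 672, 3), dtype=np.uint8)  # dummy aquarium photo
padded = add_border(image)
print(padded.shape)  # (672, 896, 3)
```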
The effect of adding a border to the picture can be seen below: