README

An R-CNN designed to handle the Stanford Cars dataset.
Largely copied from: https://github.com/foamliu/Car-Recognition
However, it has been adjusted to require slightly less pre-processing, to be slightly less memory-heavy, to be slightly more user-friendly, to accept a URL as input when making predictions (as opposed to a direct file upload), and to include Jupyter notebook files ready to deploy in the cloud.

To begin, extract all files into a common directory. In the same directory, create a sub-directory called 'datasets'.
This is where the training and test images will go; however, you will first want to combine all the images into two files:
train_cars.h5 and test_cars.h5.
To create these two files, use the 'h5write.py' script found here: https://github.com/EvanEames/h5_ReadWrite (note that you will also need the 'devkit/train_perfect_preds.txt' and 'devkit/test_perfect_preds.txt' files to create the h5 files; these txt files should be packaged with the cars dataset).
Once you have created the h5 files, place them in the 'datasets' directory, and then run CarModel.py.
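Before running CarModel.py, it can help to sanity-check that everything landed in the right place. A minimal standard-library sketch of the layout described above (the file names come from the steps above; nothing here is specific to the training code itself):

```python
from pathlib import Path

def check_layout(root="."):
    """Return a list of missing paths for the expected layout:
    the scripts in a common directory, with a 'datasets'
    sub-directory holding the two combined HDF5 files."""
    root = Path(root)
    required = [
        root / "CarModel.py",
        root / "datasets" / "train_cars.h5",
        root / "datasets" / "test_cars.h5",
    ]
    # An empty result means everything is in place.
    return [str(p) for p in required if not p.exists()]
```

Run `check_layout()` from the common directory; an empty list means you are ready to start training.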

Once the weights are well trained, use the guess.py file to make predictions on images supplied by URL.
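The URL-input workflow boils down to downloading the image bytes and feeding them to the trained model. A minimal sketch of the download step using only the standard library (the decoding, resizing, and class-name lookup depend on the Keras setup in guess.py and are deliberately omitted here):

```python
import urllib.request

def fetch_image_bytes(url, timeout=10):
    """Download raw image bytes from a URL, as an alternative
    to a direct file upload."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

# The returned bytes would then be decoded and resized to the
# network's input shape before calling model.predict(...);
# those steps follow whatever image pipeline guess.py uses.
```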