This repository contains a number plate reader with a Flask app, using a YOLO model for number plate extraction and a CNN trained from scratch on an OCR dataset for character recognition.
Assuming you are in the Number-Plate-Recognition directory.
- Create a Python virtual environment using `python3 -m venv venv`
- Activate the venv using `source venv/bin/activate` on macOS or `venv\Scripts\activate` on Windows
- Install the required libraries using `pip install -r requirements.txt`
If you want to run the interactive Flask app, navigate to the number_plate_app directory.
- Run `python3 app.py` to host the app locally
- Head to `http://127.0.0.1:5000` in any browser to load the app
- Choose an image containing a number plate to be recognised
- Upload the image and each stage of the recognition process will be shown
If you want to walk through the code and see how well it performs on multiple examples, navigate to the number_plate_code directory.
- Open `testing_area.ipynb`
- Add any test images to `example_data`
- Run the cells and analyse how well it performs
The number plate of a car can be obtained using an object detection model. The current state of the art is YOLO (You Only Look Once), a deep CNN. The model used for number plate extraction can be found on Hugging Face.
A number plate can be read by extracting each character from the number plate, passing it into a character recognition CNN model and then stringing together a word.
Each of these stages is explained in more detail below.
The model finds a bounding box of what it thinks is a number plate. With a set confidence level, we can obtain the bounding box predicted by the model, and extract the number plate from the original image.
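As an illustration, here is a minimal sketch of this stage using the `ultralytics` YOLO API. The weights filename and image path are placeholders, not the repository's actual files, and the confidence threshold is an assumed value:

```python
from ultralytics import YOLO
import cv2

# Hypothetical weights file; the repository's actual detector may be named differently.
model = YOLO("number_plate_detector.pt")

image = cv2.imread("example_data/car.jpg")      # assumed example image path
results = model.predict(image, conf=0.5)        # keep detections above a set confidence

# Assume at least one plate was detected; crop the first (highest-confidence) box
x1, y1, x2, y2 = map(int, results[0].boxes.xyxy[0].tolist())
plate = image[y1:y2, x1:x2]
cv2.imwrite("plate_crop.jpg", plate)
```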
One similarity among all number plates is that the letters are black. Therefore, after applying image processing techniques such as edge sharpening using the Laplacian operator and Otsu thresholding, we can create a binary image with the characters as the foreground and everything else as the background.
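A rough sketch of this preprocessing with OpenCV is shown below; the exact parameters and ordering used in the repository may differ:

```python
import cv2
import numpy as np

plate = cv2.imread("plate_crop.jpg")                  # cropped plate from the detection stage
gray = cv2.cvtColor(plate, cv2.COLOR_BGR2GRAY)

# Sharpen edges by subtracting the Laplacian (second derivative) from the grayscale image
laplacian = cv2.Laplacian(gray, cv2.CV_64F)
sharpened = np.clip(gray.astype(np.float64) - laplacian, 0, 255).astype(np.uint8)

# Otsu's method picks the threshold automatically; THRESH_BINARY_INV makes the
# dark characters the white foreground
_, binary = cv2.threshold(sharpened, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
```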
To extract a character, we can use the contours method from OpenCV, which identifies foreground objects within an image. This method creates a bounding box around each character, allowing us to extract that portion of the image.
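For example, a minimal version of this step might look like the following. The area filter is an illustrative value for dropping noise contours, not the repository's actual threshold:

```python
import cv2

# Find external contours of the white foreground (the characters)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

char_crops = []
for cnt in sorted(contours, key=lambda c: cv2.boundingRect(c)[0]):  # left-to-right order
    x, y, w, h = cv2.boundingRect(cnt)
    if w * h < 100:   # illustrative area filter to skip small noise contours
        continue
    char_crops.append(binary[y:y + h, x:x + w])
```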
Once we have the characters, we can feed each one into the model, obtain the predictions, and concatenate them into a string.
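A hedged sketch of this final step is below, assuming a Keras model saved as `character_cnn.h5`, a 32x32 grayscale input size, and a digits-then-letters label ordering; all three are assumptions rather than the repository's confirmed details:

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Hypothetical model path and label set
cnn = load_model("character_cnn.h5")
labels = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

predicted = []
for char_img in char_crops:
    resized = cv2.resize(char_img, (32, 32))                        # assumed CNN input size
    batch = resized.reshape(1, 32, 32, 1).astype("float32") / 255.0  # normalise to [0, 1]
    probs = cnn.predict(batch, verbose=0)
    predicted.append(labels[int(np.argmax(probs))])

plate_text = "".join(predicted)
print(plate_text)
```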
The neural network is trained on the standard OCR dataset which contains 50k images of characters.
To increase the number of examples, each training image is augmented 5 times using a mix of rotation, translation and zooming, giving a total training set of around 100k images.
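One way to express this kind of augmentation is with Keras's `ImageDataGenerator`; the ranges below are illustrative values, not the ones actually used for training, and `x_train`/`y_train` are assumed to already hold the character images and one-hot labels:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation settings covering rotation, translation and zoom
augmenter = ImageDataGenerator(
    rotation_range=10,       # small random rotations
    width_shift_range=0.1,   # horizontal translation
    height_shift_range=0.1,  # vertical translation
    zoom_range=0.1,          # random zoom in/out
)

# x_train has shape (n, height, width, 1); flow() yields augmented batches on the fly
augmented_batches = augmenter.flow(x_train, y_train, batch_size=32)
```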
Constants used during training:
- Loss: Categorical crossentropy
- Epochs: 10
- Optimiser: Adam
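To show how the constants above fit together, here is a minimal training sketch; `model`, the augmented generator, and the test arrays are assumed to exist already, and this is not the repository's exact training script:

```python
from tensorflow.keras.optimizers import Adam

# Compile with the constants listed above: categorical crossentropy loss and the Adam optimiser
model.compile(optimizer=Adam(), loss="categorical_crossentropy", metrics=["accuracy"])

# Train for 10 epochs on the augmented batches, validating on the held-out test set
history = model.fit(augmented_batches, epochs=10, validation_data=(x_test, y_test))
```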
After training, the model attains an accuracy of 96.7% on the test set.
Looking at the loss and accuracy per epoch we see that there are no signs of overfitting:
The confusion matrix shows excellent results overall; however, classes 0, 4 and 24 were misclassified as the digits 0 and 3 and the letter P. Adding more training data or further augmentation could help improve accuracy.
In addition, looking at a few test examples: