Face-Recognition-Deep-Learning-Project

Problem Statement

We need a solution in which the model not only detects a face, whether it belongs to a human or an animal, but also identifies it by name. For example, if the faces of 4 people and 2 animals (a cat and a dog) are captured by the camera and stored in the system along with their names, then after training the model should be able to identify each person and animal as soon as their face appears on the camera.

Solution Proposed

In this project, the focus is on correctly detecting faces and identifying the faces of users/animals using deepinsight/InsightFace. InsightFace is an integrated Python library for 2D and 3D face analysis. It efficiently implements a rich variety of state-of-the-art algorithms for face recognition, face detection, and face alignment, optimized for both training and deployment.
GitHub link of InsightFace: https://github.com/deepinsight/insightface
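
For illustration, the snippet below sketches how a face embedding can be obtained with InsightFace. It assumes a recent insightface release that exposes the FaceAnalysis helper; the model loading used in this repository (MXNet based) may differ, and the image file name is a placeholder.

# Minimal sketch: obtaining a face embedding with InsightFace.
# Assumes a recent insightface release exposing FaceAnalysis; the exact
# model loading in this repository may differ. "sample_face.jpg" is a placeholder.
import cv2
from insightface.app import FaceAnalysis

app = FaceAnalysis()          # downloads/loads detection and recognition models
app.prepare(ctx_id=-1)        # -1 runs on CPU, 0 on the first GPU

img = cv2.imread("sample_face.jpg")
faces = app.get(img)          # detect faces and compute their embeddings
if faces:
    embedding = faces[0].embedding   # feature vector used for recognition
    print(embedding.shape)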

Tech Stack Used

  1. Python
  2. MTCNN (Multi-task Cascaded Convolutional Networks) https://pypi.org/project/mtcnn/
  3. Keras to train the model

How to run the project

Step 1: Open your Anaconda prompt (Windows users can search for it in the Start menu; Ubuntu and Mac users can open their terminal).

Step 2: Create a new environment:
conda create -n facerecognition python==3.6.9 -y

Step 3: Activate the environment:
conda activate facerecognition
Step 4: Install MXNet:
conda install -c anaconda mxnet

Step 5: Install dlib:
conda install -c conda-forge dlib

Step 6: Uninstall the existing version of NumPy and install NumPy 1.16.1:
pip uninstall numpy
pip install numpy==1.16.1

Step 7: Install the requirements in the newly created environment:
pip install -r requirements.txt

Step 8: Installation and setup are done. Run the application:
a) cd src
b) python app.py

Video link of the project demo

https://youtu.be/MKOaQu3aXSs

How the project was designed and built

  1. app.py -> Driver program of the project. It invokes the camera and then calls the relevant methods from each module to collect pictures from the camera, train the model, and predict faces.
  2. get_faces_from_camera.py -> Captures 50 images from the live camera feed, crops the facial region of each image, and saves it at 112 x 112 resolution.
  3. faces_embedding.py -> Converts each face image into a numerical vector and saves it in pickle format. This process is called face embedding.
  4. train_softmax.py -> Trains the model on the image embeddings. The model is trained with a batch size of 8 for 5 epochs, using ReLU activation for the hidden layer and softmax for the output layer, and the result is saved in pickle format (see the sketch after this list).
  5. facePredictor.py -> Performs the prediction of the face.
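
As a rough illustration of the training step described in item 4, the sketch below trains a small Keras softmax classifier on precomputed embeddings with a batch size of 8 for 5 epochs. The file names, the hidden-layer size, and the use of a label encoder are assumptions for illustration, not the repository's exact code.

# Illustrative sketch of training a softmax classifier on face embeddings.
# File names ("embeddings.pickle", "face_model.h5", "label_encoder.pickle")
# and the hidden-layer size are assumptions, not the repository's exact values.
import pickle
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
from sklearn.preprocessing import LabelEncoder

# Load embeddings and names produced by the embedding step.
with open("embeddings.pickle", "rb") as f:
    data = pickle.load(f)
embeddings = np.array(data["embeddings"])          # shape: (num_images, embedding_dim)
le = LabelEncoder()
labels = to_categorical(le.fit_transform(data["names"]))

# ReLU hidden layer, softmax output layer.
model = Sequential()
model.add(Dense(128, activation="relu", input_shape=(embeddings.shape[1],)))
model.add(Dense(labels.shape[1], activation="softmax"))
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Batch size 8, 5 epochs, as described above.
model.fit(embeddings, labels, batch_size=8, epochs=5)

# Persist the trained classifier and label encoder.
model.save("face_model.h5")
with open("label_encoder.pickle", "wb") as f:
    pickle.dump(le, f)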

Logic behind the Face Recognition Technique

  1. Get input images of the human faces.
  2. Each face needs to be labelled with a name.
  3. The input image of size 1280 x 720 needs to be cropped to 96 x 96 or 128 x 128 and then fed to the deep learning algorithm.
  4. MTCNN detects the bounding-box coordinates, the coordinates of the facial keypoints (nose, right eye, left eye, mouth-right, mouth-left), and the confidence score of the face.
  5. Then we perform facial analysis, extracting small features and building an array of those features.
  6. We convert the image data into numbers, also called embeddings.
  7. Using the embeddings of the image(s), we can choose either a machine learning, deep learning, or distance-based approach (cosine distance / cosine similarity) for facial recognition.
  8. In the case of cosine similarity, the threshold is set to 0.8 (see the sketch after this list).
  9. If an unknown person/animal whose images were not part of training appears during prediction, the model labels the face as unknown.
  10. We use tracking to stop running face recognition when the face in the live feed remains the same, i.e. no new face appears and the existing face has already been recognized. This minimizes the computation spent on an already identified face.
  11. If no face is found for recognition, we stop tracking to minimize computation.
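
The sketch below illustrates steps 4 and 7-8: MTCNN returns the bounding box, keypoints, and confidence for each detected face, and a cosine-similarity comparison against stored embeddings with a 0.8 threshold decides the identity (or "unknown"). The frame file name and the known_embeddings/known_names lists are placeholders, not the repository's actual data.

# Illustrative sketch of MTCNN detection plus cosine-similarity matching.
# "frame.jpg" and the known_embeddings/known_names lists are placeholders.
import cv2
import numpy as np
from mtcnn import MTCNN

detector = MTCNN()
frame = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)   # MTCNN expects RGB

# Step 4: bounding box, keypoints and confidence score for each detected face.
for face in detector.detect_faces(frame):
    x, y, w, h = face["box"]
    keypoints = face["keypoints"]     # left_eye, right_eye, nose, mouth_left, mouth_right
    confidence = face["confidence"]

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Steps 7-8: match a new embedding against stored ones; below 0.8 -> "unknown".
def identify(new_embedding, known_embeddings, known_names, threshold=0.8):
    best_name, best_score = "unknown", threshold
    for emb, name in zip(known_embeddings, known_names):
        score = cosine_similarity(new_embedding, emb)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name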

Training and Accuracy Loss

(Training accuracy and loss plot: see the accuracy_loss image in the repository.)
