LIST OF MATCHED FILES - https://docs.google.com/document/d/1UF5cFRAXYJnu-SjTGbprjOEAHKGbL2jKHS_1z-nb250/edit?usp=sharing
facial recognition for missing person.txt contains the full synopsis of the project
The manin.ipynb file contains the complete code for training and testing
Every cell is commented to explain what it does
The cells run data cropping, facial feature extraction, training using a CNN, and testing on the embeddings
Cell 1 - Runs the data crop: applies OpenCV techniques to find the face within the picture and crop it
Cell 2 - Renames the files to a suitable format
Cell 3 - Runs facial feature extraction using LBPH and HOG, then extracts the features with the CNN
Cell 4 - Tests an image and finds the closest match from the test folder
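The closest-match step of Cell 4 can be sketched as follows. This is a minimal illustration, assuming the embeddings are fixed-length numeric vectors compared by Euclidean distance; the function and variable names are ours, not the notebook's:

```python
import numpy as np

# Hypothetical stand-in for the notebook's matching step: given a query
# embedding, find the closest stored embedding by Euclidean distance.
def closest_match(query, gallery):
    """Return (index, distance) of the gallery embedding nearest to query."""
    query = np.asarray(query, dtype=float)
    gallery = np.asarray(gallery, dtype=float)
    dists = np.linalg.norm(gallery - query, axis=1)  # one distance per image
    idx = int(np.argmin(dists))
    return idx, float(dists[idx])

gallery = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
idx, d = closest_match([0.9, 1.1], gallery)  # embedding 1 is nearest
```

The same function works for any embedding dimensionality, since the norm is taken along axis 1.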
Dependencies:
- OpenCV
- NumPy
- Pandas
- BeautifulSoup4
- Flask
- Matplotlib
train_dataset_cropped - the cropped training images
final_dataset_cropped - the cropped testing images
Vpree - VP-tree approach for training and testing to create embeddings
Simply run the files present within the folder
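The VP-tree lookup that the Vpree folder implements can be sketched in plain Python. This is a minimal illustration assuming the embeddings are binary tuples compared by Hamming distance; the repo's actual implementation may differ in structure and naming:

```python
import random

# Minimal vantage-point tree sketch for nearest-neighbour search over
# binary embeddings. All names here are illustrative assumptions.

def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

class VPTree:
    def __init__(self, points, dist=hamming):
        self.dist = dist
        self.vp = None
        if not points:
            return
        points = list(points)
        self.vp = points.pop(random.randrange(len(points)))  # vantage point
        if not points:
            self.mu, self.inside, self.outside = 0, None, None
            return
        ds = [dist(self.vp, p) for p in points]
        self.mu = sorted(ds)[len(ds) // 2]  # median split radius
        inner = [p for p, d in zip(points, ds) if d < self.mu]
        outer = [p for p, d in zip(points, ds) if d >= self.mu]
        self.inside = VPTree(inner, dist) if inner else None
        self.outside = VPTree(outer, dist) if outer else None

    def nearest(self, q, best=None):
        """Return (point, distance) of the stored point nearest to q."""
        if self.vp is None:
            return best
        d = self.dist(q, self.vp)
        if best is None or d < best[1]:
            best = (self.vp, d)
        # Search the likelier side first; prune the other side when no
        # point there can beat the best distance found so far.
        near, far = (self.inside, self.outside) if d < self.mu else (self.outside, self.inside)
        if near is not None:
            best = near.nearest(q, best)
        if far is not None and abs(d - self.mu) <= best[1]:
            best = far.nearest(q, best)
        return best
```

The pruning step is what makes a VP-tree faster than a linear scan: by the triangle inequality, every point on the far side of the split is at least `|d - mu|` away from the query.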
face_eye_ears_mapping - holds a test image used as the input to the face_mappings.py file, which creates mappings for the eyes, nose and ears as it trains
face_recogntion - training using the CNN approach
data_rename.py - run to rename image datasets to the required naming scheme
face_detection_and_crop - detects faces, crops them and stores the crops
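As an illustration of the kind of renaming data_rename.py performs, here is a minimal sketch assuming the requirement is a uniform `<label>_<index>.jpg` pattern; the repo's real naming scheme may differ:

```python
from pathlib import Path

# Illustrative sketch only: rename every .jpg in a folder to a uniform
# "<label>_<index>.jpg" pattern. The label and pattern are assumptions.
def rename_images(folder, label):
    """Rename all .jpg files in folder to label_0.jpg, label_1.jpg, ..."""
    folder = Path(folder)
    renamed = []
    # Materialise and sort the listing first so new names don't interfere.
    for i, img in enumerate(sorted(folder.glob("*.jpg"))):
        target = folder / f"{label}_{i}.jpg"
        img.rename(target)
        renamed.append(target.name)
    return renamed
```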
Modules IMPLEMENTED:
This crops the image to focus only on the facial features of a person
Using HOG, features are extracted from the nose, ears and eyes
A CNN is trained to extract features and find the exact mapping in the test set
LBPH is used with the CNN model to extract the histogram values, and the Hamming distance between each pair of image embeddings is used to compare how close the images are
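The LBP histogram and Hamming comparison described above can be sketched with NumPy. This is a minimal illustration of the basic 8-neighbour LBP operator; OpenCV's LBPH recogniser adds spatial grids and circular sampling, and the binarisation threshold here is an assumption:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP code for each interior pixel of a 2-D array."""
    g = np.asarray(gray, dtype=int)
    c = g[1:-1, 1:-1]  # centre pixels
    # Neighbours clockwise from the top-left; each contributes one bit.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        n = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (n >= c).astype(int) << bit  # set bit where neighbour >= centre
    return code

def lbp_histogram(gray):
    """256-bin normalised histogram of the LBP codes."""
    codes = lbp_image(gray).ravel()
    hist = np.bincount(codes, minlength=256).astype(float)
    return hist / hist.sum()

def hamming_between(h1, h2, thresh=0.0):
    """Hamming distance between two histograms binarised at a threshold."""
    b1, b2 = h1 > thresh, h2 > thresh
    return int(np.sum(b1 != b2))
```

A pixel brighter than all eight neighbours produces code 0; a pixel darker than all eight produces code 255, so the histogram summarises local texture.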
Outputs from the VP-tree method ![image](https://github.com/nishu88/KSP-IPH-2019-table15/blob/master/table15/Outputs/vptree.JPG)
This actively monitors an online live CCTV link and builds a search database from it
With the help of social media search, we can easily search for people on social media platforms. Our mapper uses Facebook's existing APIs, which take the name and image of the missing person as input. To use this API we must pass our Facebook credentials as parameters in order to fetch results. The results come back as a table embedded inside an HTML file, rendered using the Django framework. Each row of the table contains the person's Facebook profile picture, their name and the profile link.
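Reading the rows back out of that results table can be sketched with BeautifulSoup4, which is already in the dependency list. The exact markup of the generated page is an assumption, so the selectors below are illustrative:

```python
from bs4 import BeautifulSoup

# Illustrative sketch: extract (name, profile link) pairs from a results
# table with three columns: picture, name, link. Markup is assumed.
def parse_results(html):
    """Return a list of (name, profile_link) tuples from the results table."""
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for tr in soup.select("table tr"):
        cells = tr.find_all("td")
        if len(cells) < 3:
            continue  # skip the header row and malformed rows
        name = cells[1].get_text(strip=True)
        link = cells[2].find("a")
        rows.append((name, link["href"] if link else ""))
    return rows
```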
In order to easily interface with the facial detection, we created a mobile application on the Android platform, through which the user can upload an image to a server that does the processing and sends back the results. We use the OkHttp library, which provides an easy framework for sending and receiving large objects over HTTP. The app asks the user to select the image of the missing person from the device, then prompts for the server's IPv4 address and port number and sends the image there.
To make processing easy, we run a Python-based Flask server on a particular port. On the server side, the process fetches the file that has been sent and saves it. Then, with the help of the Local Binary Patterns Histogram (LBPH) algorithm implemented in OpenCV, it calculates the confidence and fetches the ID. If it is unable to fetch any ID, it returns "unknown". After this computation is done, it sends the result back to the Android client.
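The server flow above can be sketched as a minimal Flask endpoint. The route, the form field name, and the stubbed recognition step are assumptions; the real server plugs in OpenCV's LBPH face recogniser where the placeholder sits:

```python
import os
import tempfile

from flask import Flask, request
from werkzeug.utils import secure_filename

app = Flask(__name__)

def recognise(path):
    """Placeholder for the LBPH step. The real code would use OpenCV's
    LBPH face recogniser to compute an ID and a confidence; this sketch
    always reports no match."""
    return None

@app.route("/upload", methods=["POST"])
def upload():
    f = request.files.get("image")  # field name "image" is an assumption
    if f is None:
        return "no image", 400
    # Save the uploaded file, as the server description says.
    saved = os.path.join(tempfile.gettempdir(),
                         secure_filename(f.filename or "upload.jpg"))
    f.save(saved)
    result = recognise(saved)
    if result is None:
        return "unknown"  # no ID could be fetched
    person_id, confidence = result
    return f"{person_id} ({confidence:.1f})"
```

The Android client simply POSTs the image as multipart form data to this endpoint and displays the returned string.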