Facial recognition, an ability innate to humans, is challenging to replicate in machines. This project explores the use of Linear Discriminant Analysis (LDA) for facial recognition, offering practical insights into its application.
A 2015 INTERPOL release highlighted facial recognition as a leading method for non-intrusive surveillance. Despite its importance, challenges such as variation in illumination, viewpoint, and facial expression persist.
- Image Matrix Construction: Flatten images into a matrix for subsequent analysis.
- Class-Specific Mean Vectors: Derive $D$-dimensional mean vectors for each class.
- Global Mean Vector: Calculate the $D$-dimensional global mean vector.
- Scatter Matrix Construction: Build the within-class $(S_w)$ and between-class $(S_b)$ scatter matrices.
- Eigenvalue and Eigenvector Computation: Find the eigendecomposition of $S_w^{-1}S_b$.
- Dimensionality Reduction: Rank the eigenvalues and select the top $S$ eigenvectors to form the matrix $U$.
- Projection onto the $S$-dimensional Subspace: Project the original image matrix onto the new subspace.
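The steps above can be sketched in NumPy. This is a minimal illustration, not the repository's exact code; the function names (`lda_fit`, `lda_project`) and the use of a pseudo-inverse to guard against a singular $S_w$ are assumptions of this sketch.

```python
import numpy as np

def lda_fit(X, y, S):
    """Fit LDA on flattened images.

    X : (n_samples, D) matrix, one flattened image per row.
    y : (n_samples,) integer class labels.
    S : target subspace dimension (at most n_classes - 1 useful directions).
    Returns U, a (D, S) projection matrix.
    """
    D = X.shape[1]
    mu = X.mean(axis=0)                      # global D-dimensional mean
    Sw = np.zeros((D, D))                    # within-class scatter
    Sb = np.zeros((D, D))                    # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)               # class-specific D-dimensional mean
        diff = Xc - mu_c
        Sw += diff.T @ diff
        m = (mu_c - mu).reshape(-1, 1)
        Sb += Xc.shape[0] * (m @ m.T)
    # Eigendecomposition of Sw^{-1} Sb; pinv in case Sw is singular.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]   # rank eigenvalues, largest first
    return eigvecs[:, order[:S]].real        # top-S eigenvectors form U

def lda_project(X, U):
    """Project the image matrix onto the S-dimensional subspace."""
    return X @ U
```

In practice, with $D$ far larger than the number of samples, $S_w$ is singular, which is why a PCA step or a pseudo-inverse is commonly applied first.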
For the case study, Kaggle's Celebrity Face Image Dataset, containing 100 images for each of 17 celebrities, is used.
Check the Python code for the LDA implementation from scratch.
Note
The repository also includes separate ROI (Region of Interest) code segments and testing code segments that are not covered in the paper; it collects everything in one place, so nothing has to be set up manually.
- Python
- Libraries: numpy, PIL
- Install required libraries.
- Set dataset parameters (Z, C, M, N).
- Run the provided Python code for image classification.
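Before running the classification code, the dataset images must be flattened into the matrix described earlier. The sketch below, using PIL and numpy as listed above, assumes a hypothetical directory layout of one folder per celebrity (`root/<celebrity_name>/<image>`); the function name and the grayscale-resize preprocessing are assumptions, not the repository's exact code.

```python
import numpy as np
from PIL import Image
from pathlib import Path

def load_image_matrix(root, size=(64, 64)):
    """Build the flattened image matrix X and label vector y.

    Assumes one subfolder of `root` per class. Each image is converted
    to grayscale and resized so that every row of X has the same
    dimension D = size[0] * size[1].
    """
    X, y = [], []
    for label, person in enumerate(sorted(Path(root).iterdir())):
        if not person.is_dir():
            continue
        for img_path in sorted(person.glob("*")):
            img = Image.open(img_path).convert("L").resize(size)
            X.append(np.asarray(img, dtype=np.float64).ravel())
            y.append(label)
    return np.array(X), np.array(y)
```

The resulting `X` is the image matrix that the LDA steps operate on, and `y` supplies the class labels for the scatter-matrix construction.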
For any inquiries, contact jothiram@nitk.edu.in.