ML | Face Recognition Using Eigenfaces (PCA Algorithm)
Last Updated : 24 Sep, 2021
In 1991, Turk and Pentland proposed an approach to face recognition that uses dimensionality reduction and linear-algebra concepts to recognize faces. The approach is computationally inexpensive and easy to implement, and was therefore used at the time in various applications such as handwriting recognition, lip reading, and medical image analysis.
PCA (Principal Component Analysis) is a dimensionality-reduction technique proposed by Pearson in 1901. It uses eigenvalues and eigenvectors to reduce dimensionality and project the training data onto a small feature space. Let's look at the algorithm in more detail, from a face-recognition perspective.
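As a quick illustration of PCA itself, here is a minimal NumPy sketch that projects 2-D points onto their single direction of maximum variance. The data are synthetic and all sizes are illustrative assumptions, not part of the original article:

```python
import numpy as np

# Synthetic 2-D data lying almost on a line (an illustrative assumption).
rng = np.random.default_rng(0)
t = rng.standard_normal(100)
X = np.column_stack([t, 2.0 * t + 0.05 * rng.standard_normal(100)])

Xc = X - X.mean(axis=0)                  # centre the data
cov = Xc.T @ Xc / (len(Xc) - 1)          # 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

pc1 = eigvecs[:, -1]                     # principal component (largest eigenvalue)
projected = Xc @ pc1                     # 1-D representation of each point
```

Because the points are nearly collinear, almost all of the variance is captured by the first principal component; the same idea, in far higher dimension, underlies eigenfaces.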
Training Algorithm:
Consider a training set of M images, each of dimension N × N.
[Figure: Training images with true labels (LFW people dataset)]
We first flatten each image into a vector of size N² such that we get:
x_{1}, x_{2}, x_{3}, ..., x_{M}
Now we calculate the average \psi of all these face vectors and subtract it from each vector, giving centred vectors a_{i} = x_{i} - \psi.
Now we stack all the centred face vectors as columns, giving a matrix A of size N² × M:
A = \begin{bmatrix}
a_{1} & a_{2} & a_{3} & ... & a_{M}
\end{bmatrix}
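The steps so far (flattening, mean subtraction, stacking into A) can be sketched with NumPy. The image size, image count, and random pixel data below are stand-ins for a real training set:

```python
import numpy as np

# Illustrative sizes: 8x8 "images", 5 training samples (stand-ins).
N, M = 8, 5
rng = np.random.default_rng(0)
images = rng.random((M, N, N))           # placeholder for real face images

# Flatten each N x N image into a vector of length N^2, one per column.
X = images.reshape(M, N * N).T           # shape (N^2, M)

# Compute the average face and subtract it from every face vector.
psi = X.mean(axis=1, keepdims=True)      # average face, shape (N^2, 1)
A = X - psi                              # centred data matrix, shape (N^2, M)
```

After centring, each column of A sums with the others to zero along every pixel, which is exactly what PCA's covariance computation expects.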
Now we need the covariance matrix. Multiplying A by A^{T} would be the direct route: A has dimensions N² × M, so A^{T} has dimensions M × N², and the product AA^{T} is an N² × N² matrix with N² eigenvectors of size N² — not computationally feasible to calculate for typical image sizes. So instead we form the covariance matrix by multiplying A^{T} and A. This gives an M × M matrix, which has only M eigenvectors of size M (assuming M << N²):
Cov = A^{T}A
In this step we calculate the eigenvalues and eigenvectors of the above covariance matrix, and relate them to those of the full covariance matrix by multiplying both sides by A:
A^{T}A\nu_{i} = \lambda_{i}\nu_{i}
AA^{T}A\nu_{i} = \lambda_{i}A\nu_{i}
C'u_{i} = \lambda_{i}u_{i}
where
C' = AA^{T}
and u_{i} = A\nu_{i}
From the above, it follows that C' and A^{T}A have the same (nonzero) eigenvalues, and their eigenvectors are related by the equation u_{i} = A\nu_{i}. Thus, the M eigenvalues and eigenvectors of the reduced M × M covariance matrix give the M largest eigenvalues and eigenvectors of C'. So we compute the eigenvectors of the reduced matrix and map them into C' using u_{i} = A\nu_{i}.
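This covariance trick can be checked numerically. In the sketch below, a random matrix stands in for the real centred data A; the verification at the end confirms that each mapped vector u_{i} = A\nu_{i} really is an eigenvector of the large matrix AA^{T} with the same eigenvalue:

```python
import numpy as np

# Stand-in for the centred data matrix A (N^2 = 64 pixels, M = 5 faces).
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 5))

cov_small = A.T @ A                      # M x M instead of N^2 x N^2
eigvals, V = np.linalg.eigh(cov_small)   # columns of V are the v_i

U = A @ V                                # map: u_i = A v_i, each of size N^2
U /= np.linalg.norm(U, axis=0)           # normalise each eigenface

# Verify: each u_i satisfies (A A^T) u_i = lambda_i u_i.
big = A @ A.T
ok = np.allclose(big @ U, U * eigvals)
```

Only the small M × M eigendecomposition is ever computed; the 64 × 64 matrix appears here purely for the check.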
Now we select the K eigenvectors of C' corresponding to the K largest eigenvalues (where K < M). Each of these eigenvectors has size N².
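Selecting the top-K eigenvectors might look like the following sketch; the eigenvalues and eigenvector matrix here are random stand-ins for those computed from C' in the previous step:

```python
import numpy as np

# Stand-ins for the eigenvalues and mapped eigenvectors of C'.
rng = np.random.default_rng(2)
eigvals = rng.random(5)                  # M = 5 eigenvalues
U = rng.standard_normal((64, 5))         # M eigenvectors of size N^2 = 64

K = 3                                    # illustrative choice of K < M
order = np.argsort(eigvals)[::-1]        # indices sorted by eigenvalue, descending
U_k = U[:, order[:K]]                    # keep the K largest-eigenvalue vectors
```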
In this step we use the eigenvectors obtained in the previous step. We take the normalised training faces (face minus average face) x_{i} - \psi and represent each one as a linear combination of the best K eigenvectors (as shown in the diagram below):
x_{i} -\psi = \sum_{j=1}^{K} w_{j}u_{j}
These u_{j} are called EigenFaces.
[Figure: Eigenfaces]
In this step, we take the eigenface coefficients w_{j} and represent each training face as a vector of those K coefficients.
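Computing the coefficient vectors amounts to projecting the centred faces onto the chosen eigenfaces. In this sketch, random stand-ins are used for A and the eigenfaces (which are taken orthonormal via a QR factorisation):

```python
import numpy as np

# Stand-ins: centred faces A and K = 3 orthonormal eigenfaces U_k.
rng = np.random.default_rng(3)
A = rng.standard_normal((64, 5))                      # (N^2, M) centred faces
U_k, _ = np.linalg.qr(rng.standard_normal((64, 3)))   # (N^2, K), orthonormal

# Each column of W holds the K coefficients w_j for one training face.
W = U_k.T @ A                            # shape (K, M)

# Approximate reconstruction from the coefficients (lossy, since K < N^2).
A_hat = U_k @ W
```

Each face is thus compressed from N² pixel values down to K coefficients, and the reconstruction A_hat shows how much of the face those K eigenfaces capture.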
To recognise a test face, we normalise it (subtract the average face), project it onto the eigenfaces, and form its coefficient vector in the same way. We then compute the distance between this test vector and each training coefficient vector, and take the minimum distance e_r.
If this e_r is below a tolerance level T_r, the test face is recognised as the training face l that achieves the minimum; otherwise the face does not match any face in the training set.
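The matching step can be sketched as a nearest-neighbour search over the coefficient vectors. The training weights, the labels, and the tolerance value below are all hypothetical:

```python
import numpy as np

# Hypothetical training coefficient vectors (K = 3, M = 5) and labels.
rng = np.random.default_rng(4)
W_train = rng.standard_normal((3, 5))
labels = ["alice", "bob", "carol", "dave", "eve"]

# A test vector deliberately constructed near training face #2.
w_test = W_train[:, 2] + 0.01 * rng.standard_normal(3)

# Distance e_r to every training face; pick the closest.
dists = np.linalg.norm(W_train - w_test[:, None], axis=0)
best = int(np.argmin(dists))

T_r = 0.5                                # tolerance level (hypothetical value)
match = labels[best] if dists[best] < T_r else None   # None = unknown face
```

Returning None when the minimum distance exceeds T_r is what lets the system say "this face is not in the training set" rather than forcing a match.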
[Figure: Test images with predictions]
Advantages:
Easy to implement and computationally inexpensive.
No prior knowledge of the image (such as facial features) is required, apart from an identity label.
Limitations:
A properly centred face is required for training and testing.
The algorithm is sensitive to lighting, shadows, and the scale of the face in the image.
A frontal view of the face is required for the algorithm to work properly.