11/15 Notes

Face Recognition

  1. Appearance Based - uses raw pixel intensities, completely agnostic to what the actual picture is
    • Subspace analysis
      • Linear subspace analysis
        • PCA - Principal Component Analysis
        • LDA - Linear Discriminant Analysis
        • ICA - Independent Component Analysis
      • Nonlinear subspace analysis
        • KPCA - Kernel Principal Component Analysis
        • KLDA - Kernel Linear Discriminant Analysis
        • KICA - Kernel Independent Component Analysis
  2. Model Based Methods - model individual parts of the face to understand the picture
    • EBGM - Elastic Bunch Graph Matching - specific to faces
    • AAM - Active Appearance Model - can be used for more than faces
  3. Texture Based Methods - don't care whether the image is a face or not; extract features from the image
    • SIFT - Scale-Invariant Feature Transform
    • LBP - Local Binary Patterns
    • LPQ - Local Phase Quantization
    • BSIF - Binarized Statistical Image Features

DeepFace - deep learning

Turk and Pentland
  • Introduced eigenfaces - the application of PCA to face recognition (PCA itself predates them)

Most face recognition algorithms take only the eyes, nose, and mouth regions into account

In Face Recognition, an image becomes a vector

Let’s say the length of this vector is d

d = the dimension of the space the vector lives in = N x N = the pixel count of an N x N image
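
A minimal numpy sketch of the image-to-vector step (the 64 x 64 size and the random pixels are just assumptions for illustration):

  import numpy as np

  img = np.random.rand(64, 64)    # stand-in for an N x N grayscale face image (N = 64 assumed)
  x = img.reshape(-1, 1)          # flatten to a d x 1 column vector
  d = x.shape[0]                  # d = N * N = 4096 here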

Dimensionality Reduction is used in PCA
  • Face images can be represented by a point in d-dimensional space (d = N x N) = the vector
  • we can go from the high-dimensional space down to a lower-dimensional one while keeping most of the information

PCA

Input face image = weighted sum of many different face images - the same fixed set every time - called eigenfaces

The weight associated with each eigenface changes for each input face
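
A sketch of that idea, assuming a mean face mu and an eigenface matrix E_k already exist (the names match the notation used later in these notes; all values below are random placeholders):

  import numpy as np

  d, K = 4096, 20               # assumed sizes
  mu = np.random.rand(d, 1)     # placeholder mean face
  E_k = np.random.rand(d, K)    # placeholder eigenfaces, one per column
  w = np.random.rand(K, 1)      # weights - these change for each input face
  x_approx = mu + E_k @ w       # input face ~ mean face + weighted sum of eigenfaces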

Finding Eigenfaces

Step 1 - Given a set of training images, convert the images into vectors X_1, X_2, ..., X_N, where N is the total number of training images (this N is the image count, not the image side length from before)

Each image becomes a column vector of size d by 1
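
A sketch of Step 1, using random arrays in place of real training images:

  import numpy as np

  N_train = 10                                          # assumed number of training images
  images = [np.random.rand(64, 64) for _ in range(N_train)]
  vectors = [im.reshape(-1, 1) for im in images]        # each is a d x 1 column vector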

Step 2 - compute the mean face vector mu

mu = (X_1 + X_2 + ... + X_N) / N = the average of the training vectors

mu is a vector of d by 1 = column vector
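
A sketch of Step 2, with a random placeholder matrix standing in for the training vectors stacked side by side:

  import numpy as np

  d, N_train = 4096, 10
  X_cols = np.random.rand(d, N_train)           # placeholder: training vectors as columns
  mu = X_cols.mean(axis=1, keepdims=True)       # mean face, d x 1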

Step 3 - Compute the mean-subtracted face vectors

Matrix X = [X_1 - mu, X_2 - mu, ..., X_N - mu]

Remember that X_i - mu is still a column vector, so matrix X is d by N

this shifts the origin to the center of the face vectors
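
A sketch of Step 3, continuing with placeholder data:

  import numpy as np

  d, N_train = 4096, 10
  X_cols = np.random.rand(d, N_train)           # placeholder training vectors as columns
  mu = X_cols.mean(axis=1, keepdims=True)       # mean face from Step 2
  X = X_cols - mu                               # broadcasting subtracts mu from every column; X is d x N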

Step 4 - Compute the covariance matrix of matrix X

This captures the variance of the faces

Matrix Cov = X times X^T (the textbook covariance also divides by N, but that scale factor doesn't change the eigenvectors)

Matrix Cov is d by d

Each diagonal element of Cov is the variance along that dimension

The off-diagonal elements tell us how the points are spread jointly across a pair of dimensions

so the entry at row 2, column 10 compares dimension 2 with dimension 10
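
A sketch of Step 4 (a small assumed d keeps the d x d matrix cheap; X is a random placeholder for the mean-subtracted matrix from Step 3):

  import numpy as np

  d, N_train = 100, 10
  X = np.random.rand(d, N_train)        # placeholder mean-subtracted data
  Cov = X @ X.T                         # d x d; divide by N_train for the textbook covariance
  print(Cov.shape)                      # (100, 100)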

Step 5 - Compute the eigenvectors of Matrix Cov

Use eigs in MATLAB - this solves Cov e_i = v_i e_i, where v_i is the eigenvalue for eigenvector e_i

Gives you two things: the eigenvectors and their eigenvalues

Output = [e_1, e_2, e_3, ..., e_d], where each e_i is an eigenvector of size d by 1

so the output matrix is d by d

for each e_i you also get an eigenvalue v_i, which is a single number

the larger the eigenvalue, the greater the spread of the points along that eigenvector

sorted in decreasing order, v_1 >= v_2 >= ... >= v_d
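
A sketch of Step 5 in numpy, using np.linalg.eigh (a reasonable stand-in for MATLAB's eigs since Cov is symmetric) and flipping the order so the eigenvalues come out decreasing:

  import numpy as np

  d, N_train = 100, 10
  X = np.random.rand(d, N_train)
  X = X - X.mean(axis=1, keepdims=True)      # mean-subtract, as in Step 3
  Cov = X @ X.T                              # covariance matrix, as in Step 4
  v, E = np.linalg.eigh(Cov)                 # eigenvalues v, eigenvectors E (one per column)
  order = np.argsort(v)[::-1]                # eigh returns ascending order, so reverse it
  v, E = v[order], E[:, order]               # now v[0] >= v[1] >= ... >= v[d-1]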

All of this is to get the weight vectors for each eigenface

To get a compact space, take the eigenvectors and build a new matrix E_k that contains only the top K eigenvectors. The top K are chosen by their eigenvalues v_i - higher is better.

95% Rule

Keep adding eigenvalues until the ratio below reaches 0.95

(v_1 + v_2 + ... + v_K) / (v_1 + v_2 + ... + v_d) >= 0.95

K is the index at which the running sum first reaches 95% of the total

All of this gives us the compact space
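
A sketch of the 95% rule plus the top-K selection, assuming v and E are the sorted eigenvalues/eigenvectors from Step 5 (random placeholders below):

  import numpy as np

  d = 100
  v = np.sort(np.random.rand(d))[::-1]             # placeholder eigenvalues, decreasing
  E = np.linalg.qr(np.random.rand(d, d))[0]        # placeholder orthonormal eigenvectors

  ratio = np.cumsum(v) / np.sum(v)                 # (v_1 + ... + v_k) / (v_1 + ... + v_d)
  K = int(np.searchsorted(ratio, 0.95)) + 1        # first k where the ratio reaches 0.95
  E_k = E[:, :K]                                   # the compact space: d x K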

Testing Phase

Consider 2 face images

F_A and F_B (both are matrices)

  1. convert them to vectors: X_A and X_B (both are d by 1)
  2. Y_A = X_A - mu and Y_B = X_B - mu (both Y are d by 1)
  3. Compute the feature vectors
    • W_A and W_B - these are the weights
    • W_A = E_k^T x Y_A - E_k^T holds the top K eigenvectors and is K by d, so W_A is K by 1
    • same process for W_B, which also gives a K by 1 vector
  4. Compute the match score
    • Score_AB = Euclidean distance between W_A and W_B (smaller distance = better match)
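
A sketch of the whole testing phase, with random placeholders standing in for mu, E_k, and the two test faces:

  import numpy as np

  d, K = 4096, 20
  mu = np.random.rand(d, 1)                          # placeholder mean face from training
  E_k = np.linalg.qr(np.random.rand(d, K))[0]        # placeholder d x K eigenface matrix

  F_A, F_B = np.random.rand(64, 64), np.random.rand(64, 64)   # placeholder test images
  X_A, X_B = F_A.reshape(-1, 1), F_B.reshape(-1, 1)  # step 1: matrices -> d x 1 vectors
  Y_A, Y_B = X_A - mu, X_B - mu                      # step 2: mean-subtract
  W_A, W_B = E_k.T @ Y_A, E_k.T @ Y_B                # step 3: K x 1 weight vectors
  Score_AB = np.linalg.norm(W_A - W_B)               # step 4: Euclidean distance; smaller = better match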