International Journal of Applied Information Systems (IJAIS) – ISSN: 2249-0868
Foundation of Computer Science FCS, New York, USA
Volume 5, No. 4, March 2013 – www.ijais.org
A Study on Face Recognition Technique based on
Eigenface
S. Ravi, Ph.D.
Department of Computer Science,
School of Engineering and Technology
Pondicherry University, Pondicherry, INDIA
Sadique Nayeem
Department of Computer Science,
School of Engineering and Technology
Pondicherry University, Pondicherry, INDIA
ABSTRACT
Artificial face recognition is one of the popular areas of research in image processing. It differs from other biometric recognition because faces are complex, multidimensional, and almost all human faces have a similar construction. Among the many issues associated with facial recognition, some of the most important are the type, format, and composition (different backgrounds, varying illumination, and different facial expressions) of the face images used for recognition. Different approaches use specific databases that consist of a single type, format, and composition of image, and thus they lack robustness. So, in this paper a comparative study is made of four different image databases with the help of the basic eigenface PCA face recognition technique.
General Terms
Face Recognition
Keywords
Principal Component Analysis (PCA), Eigenface.
1. INTRODUCTION
The face plays a major role in our social interaction in
conveying identity and emotion. The human ability to
recognize faces is outstanding. Humans can recognize thousands of faces learned throughout their lifetimes and
identify familiar faces at a glance even after years of
separation. The ability is quite robust, despite large changes in
the visual stimulus due to viewing conditions, expression,
aging, and distractions such as glasses or changes in hairstyle.
Computational models of faces have been an active area of research since the late 1980s, as they contribute both to theoretical insights and to practical applications such as image and film processing, criminal identification, security systems, and human-computer interaction. However,
developing a computational model of face recognition is quite
difficult, because faces are complex, multidimensional, and
subject to change over time.
2. ARTIFICIAL FACE RECOGNITION
Generally, an artificial face recognition system has three phases: face representation, face detection, and face identification.
Face representation is the first step in artificial face
recognition and it deals with the representation and modeling
of faces. The representation of a face determines the
successive algorithms of detection and identification. In the
entry-level recognition, a face class should be characterized
by general properties of all the faces whereas in the
subordinate-level recognition, detailed features of eyes, nose,
and mouth have to be assigned to each individual face. There
are different approaches for face detection, which can be
classified into three categories: template-based, feature-based,
and appearance-based.
The template-matching approach is the simplest: the complete face is detected using a single template. Multiple templates may even be used for each face to account for recognition from different viewpoints. Another important variation uses a set of smaller facial-feature templates corresponding to the eyes, nose, and mouth for a single viewpoint. The advantage of template matching is its simplicity, but it requires a large amount of memory and is inefficient in matching. In feature-based approaches, a face is represented by facial features such as the position and width of the eyes, nose, and mouth, the thickness of the eyebrows and arches, face breadth, or invariant moments. This approach has the advantage of consuming less memory and achieving higher recognition speed than the template-based approach, and it can play a better role in face-scale normalization and 3D head-model-based pose estimation. However, it is very difficult to extract the facial features [1]. In appearance-based approaches, face images are projected onto a linear subspace of low dimension. Such a subspace is first constructed by applying principal component analysis to a set of training images, with eigenfaces as its eigenvectors. Later, the concept of eigenfaces was extended to eigenfeatures, such as eigeneyes and eigenmouths, for the detection of facial features [2]. More recently, the fisherface space [3] and the illumination subspace [4] have been proposed for dealing with recognition under varying illumination.
Face detection is the second step in artificial face recognition; it deals with segregating a face in a test image and isolating it from the remaining scene and background. Several approaches have been proposed for face detection. One approach exploits the elliptical structure of the human head [7]. It finds the outline of the head using an edge-detection scheme such as Canny's edge detector, and an ellipse is fitted to mark the facial region in the given image. The problem with this method is that it is applicable only to frontal face views. Another approach to face detection manipulates the images in "face space" [5]. Facial images change little when projected into the face space, whereas the projections of non-face images appear quite dissimilar; this is the property used to detect facial regions. For each candidate region, the distance between the local sub-image and the face space is calculated. This
measure is used as an indicator of "faceness". The result obtained by calculating the distance from the face space at every point in the image is called a "face map". Low values signify the presence of a facial region in the given image.
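To make the face-map idea concrete, the following is a minimal Python/NumPy sketch, not code from the paper, of computing the distance from face space at every window position; mean_face and eigenfaces are assumed to have been computed beforehand from a training set, and all names are illustrative:

import numpy as np

def face_map(image, mean_face, eigenfaces, window_shape, step=4):
    """Distance from face space at every window position (the "face map").

    image        : 2D grayscale array (the scene to search).
    mean_face    : 1D array of length h*w (the average training face).
    eigenfaces   : 2D array of shape (h*w, M), one eigenface per column.
    window_shape : (h, w) size of the training faces.
    Low values in the returned map indicate face-like regions.
    """
    h, w = window_shape
    H, W = image.shape
    dist = np.full(((H - h) // step + 1, (W - w) // step + 1), np.inf)
    for i in range(0, H - h + 1, step):
        for j in range(0, W - w + 1, step):
            patch = image[i:i + h, j:j + w].astype(float).ravel()
            centered = patch - mean_face
            weights = eigenfaces.T @ centered        # project onto the face space
            reconstruction = eigenfaces @ weights    # back-project
            dist[i // step, j // step] = np.linalg.norm(centered - reconstruction)
    return dist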
Face recognition is the third step in artificial face recognition
and it is performed at the subordinate-level. Each face in the
test database is compared with each face in the training face
database and then classified as a successful recognition if
there is a match between the test image and one of the face
images in the training database. The classification process
may be affected by several factors such as scale, pose,
illumination, facial expression, and disguise.
The problem created by the scale of a face can be dealt with by a rescaling process. In the PCA approach, the scaling factor can be found by multiple attempts. The idea is to use multi-scale eigenfaces, in which a test face image is compared with eigenfaces at a number of scales; the image will appear to be near the face space only at the closest-matching scale. The test image is scaled to multiple sizes, and the scaling factor that results in the smallest distance to the face space is used.
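As a rough illustration of this scale search (again a hypothetical sketch rather than the authors' code), the test image can be resized to several candidate sizes and the size that gives the smallest face-space distance kept; distance_to_face_space is assumed to be a user-supplied function, for example built from the mean face and eigenfaces as in the face-map sketch above:

import numpy as np
from PIL import Image

def best_scale(test_image, candidate_sizes, distance_to_face_space):
    """Return the candidate size whose resized test image is closest to face space.

    test_image             : 2D uint8 grayscale array.
    candidate_sizes        : list of (height, width) tuples to try.
    distance_to_face_space : callable mapping a 2D float array to a scalar distance.
    """
    best_size, best_dist = None, np.inf
    for (h, w) in candidate_sizes:
        resized = Image.fromarray(test_image).resize((w, h))   # PIL expects (width, height)
        d = distance_to_face_space(np.asarray(resized, dtype=float))
        if d < best_dist:
            best_size, best_dist = (h, w), d
    return best_size, best_dist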
Pose variation is caused by changes in viewpoint or head orientation, and different recognition algorithms have different sensitivities to it.
Recognition of faces under different illumination conditions is one of the challenging problems in face recognition. The same person, with the same facial expression and seen from the same viewpoint, can appear like a different person under different lighting conditions. Two approaches, the fisherface
space approach [6] and the illumination subspace approach
[8], have been proposed to handle different lighting
conditions.
Another problem in face recognition is the facial expression in the image used for recognition, since changes in expression alter the face geometry and pose a greater challenge for recognition.
Disguise is another common problem for face recognition. People use different accessories to adorn their appearance; in addition, glasses, a different hairstyle, and makeup change the appearance of a face and may pose another problem for face recognition [9]. Most of the research work so far has been devoted to dealing with these problems.
3. RECOGNITION USING PRINCIPAL
COMPONENT ANALYSIS (PCA)
In 1991, M. Turk and A. Pentland [5] proposed a method for face recognition based on an information-theoretic approach of coding and decoding face images, which may give insight into the information content of face images and emphasize the significant local and global "features". Such features may or may not be directly related to our intuitive notion of face features such as the eyes, nose, lips, and hair.
A simple approach to extracting the information contained in a face image is to capture the variation in a collection of face images, independent of any judgment of features, and to use this information to encode and compare individual face images.
The main aim of the principal component analysis method is to find the principal components of the distribution of faces, or the eigenvectors of the covariance matrix of the set of face images. The eigenvectors obtained are considered as a set of features that characterize the variation among the face images. Each image location contributes more or less to each eigenvector, so that an eigenvector can be displayed as a kind of ghostly face called an eigenface. Some of the eigenfaces obtained from the Essex face database 'face94' are shown in Figure 1.
Figure 1: Eigenfaces from the Essex face database 'face94'
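For reference, the standard eigenface computation from [5] can be written compactly (LaTeX notation; the symbols follow Turk and Pentland and are not defined elsewhere in this paper). Given M training images \Gamma_1, \ldots, \Gamma_M, each stored as an N^2-dimensional vector:

\Psi = \frac{1}{M}\sum_{n=1}^{M}\Gamma_n, \qquad \Phi_n = \Gamma_n - \Psi,

C = \frac{1}{M}\sum_{n=1}^{M}\Phi_n\Phi_n^{T} = AA^{T}, \qquad A = [\Phi_1\ \Phi_2\ \cdots\ \Phi_M].

Because AA^{T} is N^2 \times N^2, one instead computes the eigenvectors v_i of the much smaller M \times M matrix A^{T}A; the eigenfaces are u_i = Av_i (normalized to unit length), and any face \Gamma is represented by the weights \omega_i = u_i^{T}(\Gamma - \Psi).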
Each face image in the training database can be represented exactly as a linear combination of the eigenfaces. The number of possible eigenfaces is equal to the number of face images in the training set. Although there are that many eigenfaces, only a few are considered for recognition: the "best" eigenfaces, those with the largest eigenvalues, are retained and the rest are discarded. The primary reason for using fewer eigenfaces is computational efficiency. The most meaningful M eigenfaces span an M-dimensional subspace, the "face space", of all possible images. The eigenfaces are essentially the basis vectors of the eigenface decomposition.
A collection of face images can be approximately reconstructed by storing a small set of weights for each face and a small set of standard pictures. Therefore, if a multitude of face images can be reconstructed as weighted sums of a small collection of characteristic images, then an efficient way to learn and recognize faces is to extract the characteristic features from known face images and to recognize particular faces by comparing the feature weights needed to (approximately) reconstruct them with the weights associated with the known individuals.
The principal component analysis approach used for face recognition involves the following initialization operations:
1. Acquire a set of training images.
2. Calculate the eigenfaces from the training set, keeping only the best M eigenfaces, those with the highest eigenvalues. These M eigenfaces define the "face space". As new faces are encountered, the eigenfaces can be updated.
3. Calculate the corresponding distribution in M-
dimensional weight space for each known individual
(training image), by projecting their face images onto the
face space.
Having initialized the system, the following steps are used to
recognize new face images:
1. Given an image to be recognized, calculate a set of
weights of the M eigenfaces by projecting it onto each
of the eigenfaces.
2. Determine if the image is a face at all by checking to
see if the image is sufficiently close to the face space.
3. If it is a face, classify the weight pattern as either a
known person or as unknown.
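To make these steps concrete, here is a minimal Python/NumPy sketch of the initialization stage (an illustrative reimplementation assuming the training images are already loaded as equally sized grayscale arrays; it is not the MATLAB code used for the experiments in this paper, and the function and variable names are our own):

import numpy as np

def train_eigenfaces(training_images, num_components):
    """Compute the mean face, the top eigenfaces and the training weights.

    training_images : list of 2D grayscale arrays, all of the same size.
    num_components  : number M of eigenfaces to keep.
    Returns (mean_face, eigenfaces, weights), where eigenfaces has shape
    (num_pixels, M) and weights has shape (M, num_images).
    """
    # Stack each image as a column of the data matrix A.
    A = np.column_stack([img.astype(float).ravel() for img in training_images])
    mean_face = A.mean(axis=1)
    A_centered = A - mean_face[:, None]

    # Use the small (num_images x num_images) matrix A^T A instead of the
    # huge pixel-by-pixel covariance matrix A A^T (the standard trick in [5]).
    small_cov = A_centered.T @ A_centered
    eigvals, eigvecs = np.linalg.eigh(small_cov)
    order = np.argsort(eigvals)[::-1][:num_components]   # largest eigenvalues first
    eigenfaces = A_centered @ eigvecs[:, order]
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)      # unit-length eigenfaces

    weights = eigenfaces.T @ A_centered                   # project the training set
    return mean_face, eigenfaces, weights

Recognition then reduces to projecting a test image in the same way and comparing its weight vector with the stored ones, as sketched after the implementation steps in Section 4.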
The procedure for calculating the eigenfaces and using them to classify face images has been discussed in detail in
the milestone work of M. Turk and A. Pentland [5]. Skipping the details of the eigenface calculation and its use in classification, we move directly to the implementation of the eigenface technique for face recognition.
4. IMPLEMENTATION
The eigenface approach is implemented here to check whether it produces good results for four face databases that are currently used in research, in particular in face recognition. First, all the images meant for training were kept in the training set and the images meant for testing were kept in the testing set. After construction of the training set and the testing set, the following procedure was followed:
1. Create the database
   a. Load the training image files.
   b. Reshape each 2D image into a 1D image vector.
   c. Construct the 2D data matrix from the 1D image vectors.
2. Eigenface calculation
   a. Calculate the mean face.
   b. Calculate the deviation of each image from the mean.
   c. Calculate the covariance matrix.
   d. Calculate the eigenvectors and eigenvalues of the covariance matrix.
   e. Sort the eigenvalues and eliminate those below the threshold value, together with the corresponding eigenvectors.
3. Recognition
   a. Project all centered training images into the face space.
   b. Read the test image.
   c. Extract the PCA features from the test image.
   d. Calculate the Euclidean distance between the projected test image and the projections of all centered training images.
   e. Select the training image with the smallest Euclidean distance.
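A hedged Python/NumPy sketch of the recognition stage listed above might look as follows (mean_face, eigenfaces and train_weights are assumed to come from an eigenface-training routine such as the one sketched in Section 3; labels is an illustrative list holding the identity of each training image):

import numpy as np

def recognize(test_image, mean_face, eigenfaces, train_weights, labels):
    """Classify a test image by the nearest neighbour in the face space.

    test_image    : 2D grayscale array of the same size as the training faces.
    mean_face     : 1D array of length num_pixels.
    eigenfaces    : array of shape (num_pixels, M).
    train_weights : array of shape (M, num_train_images), i.e. the projections
                    of the centered training images.
    labels        : identity label of each training image.
    """
    centered = test_image.astype(float).ravel() - mean_face
    test_weights = eigenfaces.T @ centered                      # PCA features of the test image
    distances = np.linalg.norm(train_weights - test_weights[:, None], axis=0)
    best = int(np.argmin(distances))                            # smallest Euclidean distance
    return labels[best], distances[best]

Counting how many test images receive a wrong label then gives accuracy figures of the kind reported in Table 3.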
5. FACE RECOGNITION FIELD AND
APPLICATION
In recent times, face recognition has been utilized in many fields and its applications have spread into many phases of life. The following table gives a brief idea of the applications of face recognition.
Table 1. Field and Application of Face Recognition
Field Application
Security Terrorist alert, secure flight boarding
systems, stadium audience scanning,
computer security, computer application
security, database security, file encryption,
intranet security, Internet security, medical
records, secure trading terminals
Access control Border-crossing control, facility access,
vehicle access, smart kiosk and ATM,
computer access, computer program
access, computer network access, online
program access, online transactions access,
long distance learning access, online
examinations access, online database
access
Face ID Driver licenses, entitlement programs,
immigration, national ID, passports, voter
registration, welfare registration
Surveillance Advanced video surveillance, nuclear plant
surveillance, park surveillance,
neighborhood watch, power grid
surveillance, CCTV control, portal control
Smart cards Stored value security, user authentication
Law enforcement Crime stopping and suspect alert,
shoplifter recognition, suspect tracking and
investigation, suspect background check,
identifying cheats and casino undesirables,
post-event analysis, welfare fraud,
criminal face retrieval and recognition
Face databases Face indexing and retrieval, automatic face
labeling, face classification
Multimedia management Face-based search, face-based video segmentation and summarization, event detection
Human-computer interaction (HCI) Interactive gaming, proactive computing
Others Antique photo verification, very low bit-rate image and video transmission, etc.
6. IMAGE DATABASE
A number of algorithms have been proposed for face recognition, and these algorithms have been tested on different databases. Many of these face databases are publicly available and contain face images with a wide variety of poses, illumination angles, gestures, face occlusions, and illuminant colors. But these images have not been adequately annotated, which limits their usefulness for evaluating the relative performance of face recognition algorithms. For example, many of the images in existing databases are not annotated with the exact pose angles at which they were taken. Descriptions of some of these face databases follow.
6.1 Essex Face Database 'face94'
The images were captured from a fixed distance with different mouth orientations, so that different facial expressions are captured. The database consists of images of 153 individuals, male and female, at a resolution of 180 by 200 pixels (portrait format), with 20 images per individual. The images have a plain green background, no head-scale variation, and only very minor variation in head turn, tilt, and slant. There is no variation in image lighting or individual hairstyle [11]. As an example, images corresponding to one individual are shown in Figure 2.
Fig 2: Sample images of one person from Essex face
database.
6.2 The Indian Face Database
This database contains images of 61 distinct subjects, male and female, with eleven different poses for each individual. All images have a bright homogeneous background and the subjects are in an upright, frontal position. For each individual, the following poses are included: looking front, looking left, looking right, looking up, looking up towards left, looking up towards right, and looking down. In addition to variation in pose, images with different emotions (neutral, smile, sad/disgust) are included for each individual. The files are in JPEG format and each color image is 640x480 pixels in size [12]. As an example, images corresponding to one individual are shown in Figure 3.
Fig 3: Sample images from Indian face database.
6.3 Yale Database
The Yale Face Database contains 165 grayscale images in GIF format of 15 individuals. There are 11 images per subject, one for each facial expression or configuration: center-light, with glasses, happy, left-light, without glasses, normal, right-light, sad, sleepy, surprised, and wink [13]. As an example, images corresponding to one individual are shown in Figure 4.
Fig 4: Sample images from Yale database.
6.4 FACE 1999
This face image dataset was collected by Markus Weber at the California Institute of Technology. It consists of 450 frontal face images of 25 unique people with different lighting, expressions, and backgrounds. The images are in JPEG format, each of size 896 x 592 pixels [14]. As an example, images corresponding to one individual are shown in Figure 5.
Fig 5: Sample images from face 1999 database.
6.5 ORL Database (now, AT&T - The
Database of Faces)
These face images were taken between April 1992 and April 1994 at the lab. There are ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open/closed eyes, smiling/not smiling), and facial details (glasses/no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement). The files are in PGM format and the size of each image is 92x112 pixels, with 256 grey levels per pixel [15]. As an example, images corresponding to one individual are shown in Figure 6.
Fig 6: Sample images from ORL database.
6.6 AR database
This face database was created by Aleix Martinez and Robert Benavente at the Computer Vision Center (CVC) at the U.A.B. (http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html). It contains over 4000 color images corresponding to 126 people's faces (70 men and 56 women). The images feature frontal-view faces with different facial expressions, illumination conditions, and occlusions (sunglasses and scarf). No restrictions on wear (clothes, glasses, etc.), make-up, hair style, etc. were imposed on the participants. We selected 1300 images of 50 males and 50 females, with 13 images per person, to test our method. The original images are 768 by 576 pixels [16]. As an example, images corresponding to one individual are shown in Figure 7.
Fig 7: Sample images from AR database.
6.7 UMIST
It was created by Daniel B. Graham with the purpose of collecting a controlled set of images that vary in pose uniformly from frontal to side view. It consists of 564 grayscale images of 20 people of both sexes and various races (image size is about 220 x 220 pixels with 256 grey levels). Various pose angles of each person are provided, ranging from profile to frontal views [17]. It is now known as the Sheffield Face Database. As an example, images corresponding to one individual are shown in Figure 8.
Fig 8: Sample images from UMIST database.
7. RESULT
In this paper, the basic PCA face recognition technique has been implemented and tested with four different databases, whose descriptions are given in Table 2:
Table 2: Databases used in the experiment
Name of database    Image format    Image size    Image type
IFD                 jpg             110x75        Color
Face94              jpg             90x100        Color
Yale                gif             320x243       Grey
Face 1999           jpg             300x198       Color
The experiment was carried out using MATLAB R2008a with different numbers of samples kept in the training set. The test images were kept in the testing set. Both sets were loaded and a test image was provided for recognition. The experimental results are given in Table 3:
Table 3: Experimental results
Eigenface face recognition with different numbers of sample images
Name of database    Total no. of unique persons    No. of samples of each person in training set    No. of images in training set    No. of false recognitions    Accuracy rate (%)
IFD 60 1 60 31 49.18
2 120 25 59.01
3 180 16 73.77
4 240 16 73.77
5 300 12 80.32
6 360 8 86.88
7 420 3 95.08
8 480 2 96.72
9 540 1 98.36
10 600 1 98.36
11 660 1 98.36
Face94 152 1 152 47 69.07
2 304 29 80.92
3 456 12 92.10
4 608 11 92.76
5 760 11 92.76
6 912 10 93.42
7 1064 10 93.42
8 1216 9 94.07
9 1368 8 94.73
10 1520 8 94.73
11 1672 6 96.05
Yale 15 1 15 8 46.66
2 30 2 86.66
3 45 3 80.00
4 60 3 80.00
5 75 2 86.66
6 90 1 93.33
7 105 2 86.66
8 120 1 93.33
9 135 1 93.33
10 150 1 93.33
11 165 1 93.33
Face
1999
26 1 26 17 34.61
2 52 15 42.30
3 78 14 46.15
4 104 9 65.38
5 130 9 65.38
6 156 8 69.23
7 182 5 80.76
8 208 5 80.76
9 234 3 88.46
10 260 2 92.30
11 286 1 96.15
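Assuming one test image per person, the accuracy rate in Table 3 corresponds to accuracy (%) = ((number of test images - number of false recognitions) / number of test images) x 100; for the Yale database, for instance, (15 - 8)/15 x 100 = 46.66.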
Thus the results show that the recognition rate is quite low when there is only one image per individual in the training set. But as the number of samples is increased, the recognition rate also increases, and for almost all databases the recognition rate is best at around ten images per individual in the training set. With more than ten images per individual in the training set, the time required for recognition increases too much to be acceptable for real-time applications. As far
as image format, image size, and image type are concerned, the eigenface technique works in all cases.
Figure 9: Recognition accuracy (%) versus number of samples per person in the training set, for the IFD, Face94, Yale, and Face 1999 databases
8. CONCLUSION
In this paper, the eigenface PCA face recognition technique has been implemented and examined on four different face databases. The experimental results show that around ten images per person give better face recognition results. But in a real-time scenario, collecting and storing that many images for the training set is not an easy task. So, there is an urgent need to work toward high recognition accuracy for the single-image-per-person problem.
9. REFERENCES
[1] R. Chellappa, C. L. Wilson, and S. Sirohey, 1995, Human and machine recognition of faces: A survey, Proc. of IEEE, volume 83, pages 705-740.
[2] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, 1997, Eigenfaces vs. fisherfaces: Recognition using class specific linear projection, IEEE Trans. Pattern Analysis and Machine Intelligence, 19(7):711-720.
[3] A. S. Georghiades, D. J. Kriegman, and P. N. Belhumeur, 1998, Illumination cones for recognition under variable lighting: Faces, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pages 52-59.
[4] S. A. Sirohey, 1993, Human face segmentation and identification, Technical Report CAR-TR-695, Center for Automation Research, University of Maryland, College Park, MD.
[5] M. Turk and A. Pentland, 1991, Eigenfaces for recognition, Journal of Cognitive Neuroscience, vol. 3, no. 1.
[6] L. Sirovich and M. Kirby, 1987, Low dimensional procedure for the characterization of human faces, Journal of the Optical Society of America A, 4(3), 519-524.
[7] A. Samal and P. A. Iyengar, 1992, Automatic recognition and analysis of human faces and facial expressions: A survey, Pattern Recognition, 25(1): 65-77.
[8] A. S. Georghiades, D. J. Kriegman, and P. N. Belhumeur, Illumination cones for recognition under variable lighting: Faces, Center for Computational Vision and Control.
[9] C. Shan, S. Gong, and P. W. McOwan, 2009, Facial expression recognition based on Local Binary Patterns: A comparative study, Image and Vision Computing 27, 803-816.
[10] A. Pentland and T. Choudhury, Feb. 2000, Face recognition for smart environments, Computer, vol. 33, iss. 2.
[11] Essex face database 'face94', University of Essex, UK, http://cswww.essex.ac.uk/mv/allfaces/index.html
[12] V. Jain and A. Mukherjee, 2002, The Indian Face Database, http://vis-www.cs.umass.edu/~vidit/IndianFaceDatabase/
[13] Yale Database, http://cvc.yale.edu/projects/yalefaces/yalefaces.html
[14] FACE 1999, http://www.vision.caltech.edu/html-files/archive.html
[15] ORL Database, http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
[16] AR database, http://rvl1.ecn.purdue.edu/~aleix/aleix_face_DB.html
[17] UMIST Face Database, http://www.sheffield.ac.uk/eee/research/iel/research/face
Dr. S. Ravi received his B.E degree in Computer Technology
and Informatics from Institute of Road and Transport
Technology (IRTT), Erode affiliated to Bharathiar University,
Coimbatore, India in 1989, M.Tech degree in Computer and
Information Technology from Centre for Information
Technology and Engineering, Manonmaniam Sundaranar
University, Tirunelveli, in 2004 and Ph.D degree in Computer
and Information Technology from Centre for Information
Technology and Engineering, Manonmaniam Sundaranar
University, Tirunelveli, in 2010.
He has worked as an Associate Professor in the Department of
Computer Science, Cardamom Planters’ Association College,
Bodinayakanur from 1990. He has worked as an Associate
Professor in Centre for Information Technology and
Engineering of Manonmaniam Sundaranar University,
Tirunelveli from 2008, and has been working as an Assistant Professor in the Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry since 2011. His research interests include Digital Image Processing, Face Detection and Face Recognition, Machine Vision and Pattern Recognition. He has published 47 research papers (22 in international journals, 14 in IEEE international conferences, 10 in international conferences, and 4 in national conferences). He has been a member of the IEEE since 2006.
Sadique Nayeem has received his Bachelor of Technology
Degree in Computer Sc. & Engineering from Biju Patnaik
University of Technology (BPUT), Rourkela, India in 2009.
He is currently pursuing his Master of Technology degree in
Computer Science and Engineering in Department of
Computer Science from the School of Engineering and
Technology, Pondicherry University, Puducherry, India. His
research interests include Face recognition, Genetic
Algorithm and Holographic Memory.
20
30
40
50
60
70
80
90
100
1 2 3 4 5 6 7 8 9 10 11
IFD
Face94
Yale
Face 1999
Number of samples
RecognitionAccuracy(%)

Contenu connexe

Tendances

Face Recognition System Using Local Ternary Pattern and Signed Number Multipl...
Face Recognition System Using Local Ternary Pattern and Signed Number Multipl...Face Recognition System Using Local Ternary Pattern and Signed Number Multipl...
Face Recognition System Using Local Ternary Pattern and Signed Number Multipl...inventionjournals
 
Recognition of Partially Occluded Face Using Gradientface and Local Binary Pa...
Recognition of Partially Occluded Face Using Gradientface and Local Binary Pa...Recognition of Partially Occluded Face Using Gradientface and Local Binary Pa...
Recognition of Partially Occluded Face Using Gradientface and Local Binary Pa...Win Yu
 
L008.Eigenfaces And Nn Som
L008.Eigenfaces And Nn SomL008.Eigenfaces And Nn Som
L008.Eigenfaces And Nn Somramesh kumar
 
A Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction MethodA Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction MethodIRJET Journal
 
Implementation of Face Recognition in Cloud Vision Using Eigen Faces
Implementation of Face Recognition in Cloud Vision Using Eigen FacesImplementation of Face Recognition in Cloud Vision Using Eigen Faces
Implementation of Face Recognition in Cloud Vision Using Eigen FacesIJERA Editor
 
Facial emotion recognition
Facial emotion recognitionFacial emotion recognition
Facial emotion recognitionRahin Patel
 
Reconstruction of partially damaged facial image
Reconstruction of partially damaged facial imageReconstruction of partially damaged facial image
Reconstruction of partially damaged facial imageeSAT Publishing House
 
A novel approach for performance parameter estimation of face recognition bas...
A novel approach for performance parameter estimation of face recognition bas...A novel approach for performance parameter estimation of face recognition bas...
A novel approach for performance parameter estimation of face recognition bas...IJMER
 
Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Editor IJARCET
 
Image Redundancy and Its Elimination
Image Redundancy and Its EliminationImage Redundancy and Its Elimination
Image Redundancy and Its EliminationIJMERJOURNAL
 
Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...
Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...
Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...Waqas Tariq
 
Face recognition using laplacian faces
Face recognition using laplacian facesFace recognition using laplacian faces
Face recognition using laplacian facesPulkiŧ Sharma
 
A Novel Mathematical Based Method for Generating Virtual Samples from a Front...
A Novel Mathematical Based Method for Generating Virtual Samples from a Front...A Novel Mathematical Based Method for Generating Virtual Samples from a Front...
A Novel Mathematical Based Method for Generating Virtual Samples from a Front...CSCJournals
 
Facial Emotion Recognition: A Deep Learning approach
Facial Emotion Recognition: A Deep Learning approachFacial Emotion Recognition: A Deep Learning approach
Facial Emotion Recognition: A Deep Learning approachAshwinRachha
 
Happiness Expression Recognition at Different Age Conditions
Happiness Expression Recognition at Different Age ConditionsHappiness Expression Recognition at Different Age Conditions
Happiness Expression Recognition at Different Age ConditionsEditor IJMTER
 
Face recognition using laplacianfaces (synopsis)
Face recognition using laplacianfaces (synopsis)Face recognition using laplacianfaces (synopsis)
Face recognition using laplacianfaces (synopsis)Mumbai Academisc
 
A FACE RECOGNITION USING LINEAR-DIAGONAL BINARY GRAPH PATTERN FEATURE EXTRACT...
A FACE RECOGNITION USING LINEAR-DIAGONAL BINARY GRAPH PATTERN FEATURE EXTRACT...A FACE RECOGNITION USING LINEAR-DIAGONAL BINARY GRAPH PATTERN FEATURE EXTRACT...
A FACE RECOGNITION USING LINEAR-DIAGONAL BINARY GRAPH PATTERN FEATURE EXTRACT...ijfcstjournal
 

Tendances (19)

Face Recognition System Using Local Ternary Pattern and Signed Number Multipl...
Face Recognition System Using Local Ternary Pattern and Signed Number Multipl...Face Recognition System Using Local Ternary Pattern and Signed Number Multipl...
Face Recognition System Using Local Ternary Pattern and Signed Number Multipl...
 
J01116164
J01116164J01116164
J01116164
 
Recognition of Partially Occluded Face Using Gradientface and Local Binary Pa...
Recognition of Partially Occluded Face Using Gradientface and Local Binary Pa...Recognition of Partially Occluded Face Using Gradientface and Local Binary Pa...
Recognition of Partially Occluded Face Using Gradientface and Local Binary Pa...
 
L008.Eigenfaces And Nn Som
L008.Eigenfaces And Nn SomL008.Eigenfaces And Nn Som
L008.Eigenfaces And Nn Som
 
A Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction MethodA Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction Method
 
Implementation of Face Recognition in Cloud Vision Using Eigen Faces
Implementation of Face Recognition in Cloud Vision Using Eigen FacesImplementation of Face Recognition in Cloud Vision Using Eigen Faces
Implementation of Face Recognition in Cloud Vision Using Eigen Faces
 
A017530114
A017530114A017530114
A017530114
 
Facial emotion recognition
Facial emotion recognitionFacial emotion recognition
Facial emotion recognition
 
Reconstruction of partially damaged facial image
Reconstruction of partially damaged facial imageReconstruction of partially damaged facial image
Reconstruction of partially damaged facial image
 
A novel approach for performance parameter estimation of face recognition bas...
A novel approach for performance parameter estimation of face recognition bas...A novel approach for performance parameter estimation of face recognition bas...
A novel approach for performance parameter estimation of face recognition bas...
 
Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113
 
Image Redundancy and Its Elimination
Image Redundancy and Its EliminationImage Redundancy and Its Elimination
Image Redundancy and Its Elimination
 
Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...
Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...
Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...
 
Face recognition using laplacian faces
Face recognition using laplacian facesFace recognition using laplacian faces
Face recognition using laplacian faces
 
A Novel Mathematical Based Method for Generating Virtual Samples from a Front...
A Novel Mathematical Based Method for Generating Virtual Samples from a Front...A Novel Mathematical Based Method for Generating Virtual Samples from a Front...
A Novel Mathematical Based Method for Generating Virtual Samples from a Front...
 
Facial Emotion Recognition: A Deep Learning approach
Facial Emotion Recognition: A Deep Learning approachFacial Emotion Recognition: A Deep Learning approach
Facial Emotion Recognition: A Deep Learning approach
 
Happiness Expression Recognition at Different Age Conditions
Happiness Expression Recognition at Different Age ConditionsHappiness Expression Recognition at Different Age Conditions
Happiness Expression Recognition at Different Age Conditions
 
Face recognition using laplacianfaces (synopsis)
Face recognition using laplacianfaces (synopsis)Face recognition using laplacianfaces (synopsis)
Face recognition using laplacianfaces (synopsis)
 
A FACE RECOGNITION USING LINEAR-DIAGONAL BINARY GRAPH PATTERN FEATURE EXTRACT...
A FACE RECOGNITION USING LINEAR-DIAGONAL BINARY GRAPH PATTERN FEATURE EXTRACT...A FACE RECOGNITION USING LINEAR-DIAGONAL BINARY GRAPH PATTERN FEATURE EXTRACT...
A FACE RECOGNITION USING LINEAR-DIAGONAL BINARY GRAPH PATTERN FEATURE EXTRACT...
 

Similaire à A Study on Face Recognition Technique based on Eigenface

IRJET- A Review on Various Approaches of Face Recognition
IRJET- A Review on Various Approaches of Face RecognitionIRJET- A Review on Various Approaches of Face Recognition
IRJET- A Review on Various Approaches of Face RecognitionIRJET Journal
 
DETECTING FACIAL EXPRESSION IN IMAGES
DETECTING FACIAL EXPRESSION IN IMAGESDETECTING FACIAL EXPRESSION IN IMAGES
DETECTING FACIAL EXPRESSION IN IMAGESJournal For Research
 
Fiducial Point Location Algorithm for Automatic Facial Expression Recognition
Fiducial Point Location Algorithm for Automatic Facial Expression RecognitionFiducial Point Location Algorithm for Automatic Facial Expression Recognition
Fiducial Point Location Algorithm for Automatic Facial Expression Recognitionijtsrd
 
A survey on face recognition techniques
A survey on face recognition techniquesA survey on face recognition techniques
A survey on face recognition techniquesAlexander Decker
 
IRJET- A Survey on Facial Expression Recognition Robust to Partial Occlusion
IRJET- A Survey on Facial Expression Recognition Robust to Partial OcclusionIRJET- A Survey on Facial Expression Recognition Robust to Partial Occlusion
IRJET- A Survey on Facial Expression Recognition Robust to Partial OcclusionIRJET Journal
 
Ijarcet vol-2-issue-4-1352-1356
Ijarcet vol-2-issue-4-1352-1356Ijarcet vol-2-issue-4-1352-1356
Ijarcet vol-2-issue-4-1352-1356Editor IJARCET
 
Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...
Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...
Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...Sana Nasar
 
Facial expression identification by using features of salient facial landmarks
Facial expression identification by using features of salient facial landmarksFacial expression identification by using features of salient facial landmarks
Facial expression identification by using features of salient facial landmarkseSAT Journals
 
Facial expression identification by using features of salient facial landmarks
Facial expression identification by using features of salient facial landmarksFacial expression identification by using features of salient facial landmarks
Facial expression identification by using features of salient facial landmarkseSAT Journals
 
Comparative Studies for the Human Facial Expressions Recognition Techniques
Comparative Studies for the Human Facial Expressions Recognition TechniquesComparative Studies for the Human Facial Expressions Recognition Techniques
Comparative Studies for the Human Facial Expressions Recognition Techniquesijtsrd
 
IRJET - Emotionalizer : Face Emotion Detection System
IRJET - Emotionalizer : Face Emotion Detection SystemIRJET - Emotionalizer : Face Emotion Detection System
IRJET - Emotionalizer : Face Emotion Detection SystemIRJET Journal
 
IRJET- Emotionalizer : Face Emotion Detection System
IRJET- Emotionalizer : Face Emotion Detection SystemIRJET- Emotionalizer : Face Emotion Detection System
IRJET- Emotionalizer : Face Emotion Detection SystemIRJET Journal
 
Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- S...
Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- S...Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- S...
Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- S...IJERA Editor
 

Similaire à A Study on Face Recognition Technique based on Eigenface (20)

Fl33971979
Fl33971979Fl33971979
Fl33971979
 
Fl33971979
Fl33971979Fl33971979
Fl33971979
 
IRJET- A Review on Various Approaches of Face Recognition
IRJET- A Review on Various Approaches of Face RecognitionIRJET- A Review on Various Approaches of Face Recognition
IRJET- A Review on Various Approaches of Face Recognition
 
50120140504002
5012014050400250120140504002
50120140504002
 
DETECTING FACIAL EXPRESSION IN IMAGES
DETECTING FACIAL EXPRESSION IN IMAGESDETECTING FACIAL EXPRESSION IN IMAGES
DETECTING FACIAL EXPRESSION IN IMAGES
 
Fiducial Point Location Algorithm for Automatic Facial Expression Recognition
Fiducial Point Location Algorithm for Automatic Facial Expression RecognitionFiducial Point Location Algorithm for Automatic Facial Expression Recognition
Fiducial Point Location Algorithm for Automatic Facial Expression Recognition
 
Independent Research
Independent ResearchIndependent Research
Independent Research
 
20320130406016
2032013040601620320130406016
20320130406016
 
A survey on face recognition techniques
A survey on face recognition techniquesA survey on face recognition techniques
A survey on face recognition techniques
 
Cu31632635
Cu31632635Cu31632635
Cu31632635
 
IRJET- A Survey on Facial Expression Recognition Robust to Partial Occlusion
IRJET- A Survey on Facial Expression Recognition Robust to Partial OcclusionIRJET- A Survey on Facial Expression Recognition Robust to Partial Occlusion
IRJET- A Survey on Facial Expression Recognition Robust to Partial Occlusion
 
Ijarcet vol-2-issue-4-1352-1356
Ijarcet vol-2-issue-4-1352-1356Ijarcet vol-2-issue-4-1352-1356
Ijarcet vol-2-issue-4-1352-1356
 
Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...
Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...
Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...
 
Facial expression identification by using features of salient facial landmarks
Facial expression identification by using features of salient facial landmarksFacial expression identification by using features of salient facial landmarks
Facial expression identification by using features of salient facial landmarks
 
Facial expression identification by using features of salient facial landmarks
Facial expression identification by using features of salient facial landmarksFacial expression identification by using features of salient facial landmarks
Facial expression identification by using features of salient facial landmarks
 
Comparative Studies for the Human Facial Expressions Recognition Techniques
Comparative Studies for the Human Facial Expressions Recognition TechniquesComparative Studies for the Human Facial Expressions Recognition Techniques
Comparative Studies for the Human Facial Expressions Recognition Techniques
 
IRJET - Emotionalizer : Face Emotion Detection System
IRJET - Emotionalizer : Face Emotion Detection SystemIRJET - Emotionalizer : Face Emotion Detection System
IRJET - Emotionalizer : Face Emotion Detection System
 
IRJET- Emotionalizer : Face Emotion Detection System
IRJET- Emotionalizer : Face Emotion Detection SystemIRJET- Emotionalizer : Face Emotion Detection System
IRJET- Emotionalizer : Face Emotion Detection System
 
Ijetcas14 435
Ijetcas14 435Ijetcas14 435
Ijetcas14 435
 
Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- S...
Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- S...Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- S...
Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- S...
 

Plus de sadique_ghitm

Organizational Behaviour
Organizational BehaviourOrganizational Behaviour
Organizational Behavioursadique_ghitm
 
Digital India Initiative
Digital India Initiative Digital India Initiative
Digital India Initiative sadique_ghitm
 
Pumping lemma for regular language
Pumping lemma for regular languagePumping lemma for regular language
Pumping lemma for regular languagesadique_ghitm
 
Entity Relationship Diagrams
Entity Relationship DiagramsEntity Relationship Diagrams
Entity Relationship Diagramssadique_ghitm
 
Data Flow Diagram (DFD)
Data Flow Diagram (DFD)Data Flow Diagram (DFD)
Data Flow Diagram (DFD)sadique_ghitm
 
Detecting HTTP Botnet using Artificial Immune System (AIS)
Detecting HTTP Botnet using Artificial Immune System (AIS)Detecting HTTP Botnet using Artificial Immune System (AIS)
Detecting HTTP Botnet using Artificial Immune System (AIS)sadique_ghitm
 
Handling of Incident, Challenges, Risks, Vulnerability and Implementing Detec...
Handling of Incident, Challenges, Risks, Vulnerability and Implementing Detec...Handling of Incident, Challenges, Risks, Vulnerability and Implementing Detec...
Handling of Incident, Challenges, Risks, Vulnerability and Implementing Detec...sadique_ghitm
 
Study and Analysis of Novel Face Recognition Techniques using PCA, LDA and Ge...
Study and Analysis of Novel Face Recognition Techniques using PCA, LDA and Ge...Study and Analysis of Novel Face Recognition Techniques using PCA, LDA and Ge...
Study and Analysis of Novel Face Recognition Techniques using PCA, LDA and Ge...sadique_ghitm
 
Face recognition: A Comparison of Appearance Based Approaches
Face recognition: A Comparison of Appearance Based ApproachesFace recognition: A Comparison of Appearance Based Approaches
Face recognition: A Comparison of Appearance Based Approachessadique_ghitm
 
Design and analysis of a mobile file sharing system for opportunistic networks
Design and analysis of a mobile file sharing system for opportunistic networksDesign and analysis of a mobile file sharing system for opportunistic networks
Design and analysis of a mobile file sharing system for opportunistic networkssadique_ghitm
 
A hybrid genetic algorithm and chaotic function model for image encryption
A hybrid genetic algorithm and chaotic function model for image encryptionA hybrid genetic algorithm and chaotic function model for image encryption
A hybrid genetic algorithm and chaotic function model for image encryptionsadique_ghitm
 
A controlled experiment in assessing and estimating software maintenance tasks
A controlled experiment in assessing and estimating software maintenance tasks A controlled experiment in assessing and estimating software maintenance tasks
A controlled experiment in assessing and estimating software maintenance tasks sadique_ghitm
 

Plus de sadique_ghitm (16)

Attitude
AttitudeAttitude
Attitude
 
Personality
PersonalityPersonality
Personality
 
Organizational Behaviour
Organizational BehaviourOrganizational Behaviour
Organizational Behaviour
 
Digital India Initiative
Digital India Initiative Digital India Initiative
Digital India Initiative
 
Pumping lemma for regular language
Pumping lemma for regular languagePumping lemma for regular language
Pumping lemma for regular language
 
Entity Relationship Diagrams
Entity Relationship DiagramsEntity Relationship Diagrams
Entity Relationship Diagrams
 
Data Flow Diagram (DFD)
Data Flow Diagram (DFD)Data Flow Diagram (DFD)
Data Flow Diagram (DFD)
 
Detecting HTTP Botnet using Artificial Immune System (AIS)
Detecting HTTP Botnet using Artificial Immune System (AIS)Detecting HTTP Botnet using Artificial Immune System (AIS)
Detecting HTTP Botnet using Artificial Immune System (AIS)
 
Handling of Incident, Challenges, Risks, Vulnerability and Implementing Detec...
Handling of Incident, Challenges, Risks, Vulnerability and Implementing Detec...Handling of Incident, Challenges, Risks, Vulnerability and Implementing Detec...
Handling of Incident, Challenges, Risks, Vulnerability and Implementing Detec...
 
Study and Analysis of Novel Face Recognition Techniques using PCA, LDA and Ge...
Study and Analysis of Novel Face Recognition Techniques using PCA, LDA and Ge...Study and Analysis of Novel Face Recognition Techniques using PCA, LDA and Ge...
Study and Analysis of Novel Face Recognition Techniques using PCA, LDA and Ge...
 
Computer Worms
Computer WormsComputer Worms
Computer Worms
 
Face recognition: A Comparison of Appearance Based Approaches
Face recognition: A Comparison of Appearance Based ApproachesFace recognition: A Comparison of Appearance Based Approaches
Face recognition: A Comparison of Appearance Based Approaches
 
Design and analysis of a mobile file sharing system for opportunistic networks
Design and analysis of a mobile file sharing system for opportunistic networksDesign and analysis of a mobile file sharing system for opportunistic networks
Design and analysis of a mobile file sharing system for opportunistic networks
 
A hybrid genetic algorithm and chaotic function model for image encryption
A hybrid genetic algorithm and chaotic function model for image encryptionA hybrid genetic algorithm and chaotic function model for image encryption
A hybrid genetic algorithm and chaotic function model for image encryption
 
A controlled experiment in assessing and estimating software maintenance tasks
A controlled experiment in assessing and estimating software maintenance tasks A controlled experiment in assessing and estimating software maintenance tasks
A controlled experiment in assessing and estimating software maintenance tasks
 
Holographic Memory
Holographic MemoryHolographic Memory
Holographic Memory
 

Dernier

Making and Justifying Mathematical Decisions.pdf
Making and Justifying Mathematical Decisions.pdfMaking and Justifying Mathematical Decisions.pdf
Making and Justifying Mathematical Decisions.pdfChris Hunter
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingTeacherCyreneCayanan
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...christianmathematics
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactPECB
 
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...KokoStevan
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhikauryashika82
 
Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.MateoGardella
 
psychiatric nursing HISTORY COLLECTION .docx
psychiatric  nursing HISTORY  COLLECTION  .docxpsychiatric  nursing HISTORY  COLLECTION  .docx
psychiatric nursing HISTORY COLLECTION .docxPoojaSen20
 
Unit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptxUnit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptxVishalSingh1417
 
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17  How to Extend Models Using Mixin ClassesMixin Classes in Odoo 17  How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17 How to Extend Models Using Mixin ClassesCeline George
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxVishalSingh1417
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.pptRamjanShidvankar
 
Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxnegromaestrong
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphThiyagu K
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 
Gardella_PRCampaignConclusion Pitch Letter
Gardella_PRCampaignConclusion Pitch LetterGardella_PRCampaignConclusion Pitch Letter
Gardella_PRCampaignConclusion Pitch LetterMateoGardella
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfAdmir Softic
 

Dernier (20)


A Study on Face Recognition Technique based on Eigenface

But the problem with the template-matching approach is that it requires a large amount of memory and is quite inefficient in matching.

In feature-based approaches, a face is represented by facial features such as the position and width of the eyes, nose, and mouth, the thickness of the eyebrows and arches, face breadth, or invariant moments. This approach has the advantage of consuming less memory and achieves a higher recognition rate and speed than the template-based approach. Such features can also play a better role in face scale normalization and 3D head-model-based pose estimation. However, it is very difficult to extract the facial features reliably [1].

In appearance-based approaches, face images are projected onto a linear subspace of low dimension. Such a subspace is first constructed by applying principal component analysis to a set of training images, with eigenfaces as its eigenvectors. In later work, the concept of eigenfaces was extended to eigenfeatures, such as eigeneyes and eigenmouths, for the detection of facial features [2]. More recently, the fisherface space [3] and the illumination subspace [4] have been proposed for dealing with recognition under varying illumination.

Face detection is the second step in artificial face recognition; it deals with locating a face in a test image and isolating it from the remaining scene and background. Several approaches have been proposed for face detection. One approach exploits the elliptical structure of the human head [7]: the outline of the head is found with an edge-detection scheme such as Canny's edge detector, and an ellipse is fitted to mark the facial region in the given image. The problem with this method is that it is applicable only to frontal face views. Another approach to face detection manipulates images in "face space" [5]. Facial images change little when they are projected into the face space, whereas the projections of non-face images appear quite dissimilar; this property is used to detect facial regions. For each candidate region, the distance between the local sub-image and the face space is calculated, and this measure is used as an indicator of "faceness". The result obtained by calculating the distance from the face space at every point in the image is called a "face map"; low values signify the presence of a facial region in the given image.
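As a rough illustration of this face-map idea (a minimal sketch, not code from the paper), the following NumPy snippet computes the distance between an image patch and the face space; the mean_face and eigenfaces arrays are assumed to come from a PCA model trained beforehand, as described in Section 3.

```python
import numpy as np

def distance_from_face_space(patch, mean_face, eigenfaces):
    """Reconstruction error of a flattened image patch w.r.t. the face space.

    patch      : 1-D array, a sub-image flattened to the training resolution
    mean_face  : 1-D array, mean of the training face vectors
    eigenfaces : 2-D array of shape (n_eigenfaces, n_pixels), orthonormal rows
    """
    phi = patch.astype(float) - mean_face        # centre the patch
    weights = eigenfaces @ phi                   # project onto the eigenfaces
    reconstruction = eigenfaces.T @ weights      # back-project into image space
    return np.linalg.norm(phi - reconstruction)  # distance from the face space

# Evaluating this function at every image location yields the "face map";
# low values (local minima below a threshold) mark candidate facial regions.
```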
Face recognition is the third step in artificial face recognition and it is performed at the subordinate level. Each face in the test database is compared with each face in the training database, and the recognition is counted as successful if the test image matches one of the face images in the training database. The classification process may be affected by several factors such as scale, pose, illumination, facial expression, and disguise.

The problem created by the scale of a face can be dealt with by rescaling. In the PCA approach, the scaling factor can be found by multiple attempts: the idea is to use multi-scale eigenfaces, in which a test face image is compared with eigenfaces at a number of scales, and the image will appear to be near the face space only of the closest scaled eigenfaces. The test image is therefore scaled to multiple sizes, and the scaling factor that results in the smallest distance to the face space is used. Differences in pose are caused by changes in viewpoint or head orientation, and different recognition algorithms have different sensitivities to pose variation. Recognizing faces under different illumination conditions is one of the most challenging problems in face recognition: the same person, with the same facial expression and seen from the same viewpoint, can look like a different person under different lighting. Two approaches, the fisherface space approach [6] and the illumination subspace approach [8], have been proposed to handle different lighting conditions. Facial expression is another problem, since a change in expression alters the geometry of the face and poses a greater challenge for recognition. Disguise is a further common problem: people use different accessories to adorn their appearance, and glasses, different hairstyles, and make-up change the appearance of a face [9]. Much of the research so far is concerned with dealing with these problems.

3. RECOGNITION USING PRINCIPAL COMPONENT ANALYSIS (PCA)

In 1991, M. Turk and A. Pentland [5] proposed a method for face recognition based on an information-theoretic approach of coding and decoding face images, which may give insight into the information content of face images and emphasize the significant local and global "features". Such features may or may not be directly related to our intuitive notion of face features such as the eyes, nose, lips, and hair. A simple approach to extracting the information contained in a face image is to capture the variation in a collection of face images, independent of any judgment about features, and to use this information to encode and compare individual face images. The main aim of the principal component analysis method is to find the principal components of the distribution of faces, that is, the eigenvectors of the covariance matrix of the set of face images.
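For reference, the standard eigenface construction that this description follows (summarized here in our own notation, after Turk and Pentland [5], rather than reproduced from the paper) is:

```latex
% Training faces \Gamma_1,\dots,\Gamma_M, each flattened to an N^2-dimensional vector
\Psi = \frac{1}{M}\sum_{n=1}^{M}\Gamma_n, \qquad
\Phi_i = \Gamma_i - \Psi, \qquad
A = [\,\Phi_1\;\Phi_2\;\cdots\;\Phi_M\,]

% Covariance matrix of the training set; its eigenvectors u_k are the eigenfaces
C = \frac{1}{M}\sum_{n=1}^{M}\Phi_n\Phi_n^{T} = \frac{1}{M}AA^{T}, \qquad C\,u_k = \lambda_k u_k

% Because AA^T is N^2 x N^2, one solves the much smaller M x M problem instead:
A^{T}A\,v_k = \mu_k v_k \;\Rightarrow\; u_k = A\,v_k \quad (\text{normalized to unit length})
```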
The eigenvectors obtained are considered as a set of features that characterize the variation among the face images. Each image location contributes more or less to each eigenvector, so that an eigenvector can be displayed as a kind of ghostly face called an eigenface. Some of the eigenfaces obtained from the Essex face database 'face94' are shown in Figure 1.

Figure 1: Eigenfaces of the Essex face database 'face94'

Each face image in the training database can be represented exactly as a linear combination of the eigenfaces. The number of possible eigenfaces is equal to the number of face images in the training set. Even so, only a few eigenfaces are kept for recognition: the "best" eigenfaces, those with the largest eigenvalues, are retained and the rest are discarded. The primary reason for using fewer eigenfaces is computational efficiency. The most meaningful M eigenfaces span an M-dimensional subspace, the "face space", of all possible images. The eigenfaces are essentially the basis vectors of the eigenface decomposition. A collection of face images can be approximately reconstructed by storing a small set of weights for each face and a small set of standard pictures. Therefore, if a multitude of face images can be reconstructed by a weighted sum of a small collection of characteristic images, then an efficient way to learn and recognize faces is to build the characteristic features from known face images and to recognize particular faces by comparing the feature weights needed to (approximately) reconstruct them with the weights associated with the known individuals.

The principal component analysis approach to face recognition involves the following initialization operations:
1. Acquire a set of training images.
2. Calculate the eigenfaces from the training set, keeping only the M eigenfaces with the largest eigenvalues. These M eigenfaces define the "face space". As new faces are encountered, the eigenfaces can be updated.
3. Calculate the corresponding distribution in the M-dimensional weight space for each known individual (training image) by projecting their face images onto the face space.

Having initialized the system, the following steps are used to recognize new face images:
1. Given an image to be recognized, calculate a set of weights of the M eigenfaces by projecting it onto each of the eigenfaces.
2. Determine whether the image is a face at all by checking whether it is sufficiently close to the face space.
3. If it is a face, classify the weight pattern as either a known person or unknown.

The procedure for calculating the eigenfaces and using them to classify a face image is discussed in great detail in the milestone work of M. Turk and A. Pentland [5].
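In Turk and Pentland's formulation these recognition steps amount to the following (again a summary of the standard method in our own notation, with the thresholds theta taken as assumed symbols):

```latex
% 1. Weights of the test image \Gamma with respect to the M retained eigenfaces u_k
\omega_k = u_k^{T}(\Gamma - \Psi), \qquad \Omega = [\,\omega_1,\omega_2,\dots,\omega_M\,]^{T}

% 2. Face / non-face test: distance between the centred image and its projection
\Phi = \Gamma - \Psi, \qquad \Phi_f = \sum_{k=1}^{M}\omega_k u_k, \qquad
\varepsilon^{2} = \lVert \Phi - \Phi_f \rVert^{2}
\quad (\text{a face if } \varepsilon < \theta_{\delta})

% 3. Known / unknown test: distance to each stored class vector \Omega_i
\varepsilon_i = \lVert \Omega - \Omega_i \rVert
\quad (\text{person } i \text{ if } \min_i \varepsilon_i < \theta_{\varepsilon})
```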
So, skipping the details of the eigenface calculation and its use in classification, we go directly to the implementation of the eigenface technique for face recognition.

4. IMPLEMENTATION

The Eigenface approach is implemented here to check whether it produces good results for four face databases that are currently used in research, in particular in face recognition. First, all images intended for training were placed in the training set and the images intended for testing were placed in the testing set. After construction of the training set and the testing set, the following procedure was followed (a code sketch of these steps is given below):
1. Create the database
   a. Load the training image files.
   b. Reshape each 2D image into a 1D image vector.
   c. Construct the 2D data matrix from the 1D image vectors.
2. Eigenface calculation
   a. Calculate the mean.
   b. Calculate the deviation of each image from the mean.
   c. Calculate the covariance matrix.
   d. Calculate the eigenvectors of the covariance matrix.
   e. Sort the eigenvalues and eliminate those below the threshold value.
3. Recognition
   a. Project all the centered training images into the face space.
   b. Read the test image.
   c. Extract the PCA features from the test image.
   d. Calculate the Euclidean distance between the projected test image and the projection of every centered training image.
   e. Select the training image with the smallest Euclidean distance.
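The paper's experiments were run in MATLAB R2008a; the NumPy sketch below is an illustrative re-implementation of the procedure above, not the authors' code. The image size, the JPEG file pattern, the label scheme (file name stem identifies the person), and the number of retained eigenfaces are assumptions made for the example.

```python
import numpy as np
from pathlib import Path
from PIL import Image

def create_database(folder, size=(90, 100)):
    """Step 1: load the training images and stack them as column vectors."""
    vectors, labels = [], []
    for path in sorted(Path(folder).glob("*.jpg")):        # assumed JPEG training files
        img = Image.open(path).convert("L").resize(size)   # greyscale, common resolution
        vectors.append(np.asarray(img, dtype=float).ravel())  # 2-D image -> 1-D vector
        labels.append(path.stem)                           # assumed: file name names the person
    return np.column_stack(vectors), labels                # data matrix, shape (pixels, n_images)

def calculate_eigenfaces(T, n_keep=20):
    """Step 2: mean, deviations, covariance trick, and the leading eigenfaces."""
    mean = T.mean(axis=1, keepdims=True)
    A = T - mean                                   # deviation of each image from the mean
    L = A.T @ A                                    # small (n_images x n_images) surrogate of the covariance
    vals, vecs = np.linalg.eigh(L)                 # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_keep]        # keep the n_keep largest eigenvalues
    U = A @ vecs[:, order]                         # map surrogate eigenvectors back to pixel space
    U /= np.linalg.norm(U, axis=0)                 # unit-length eigenfaces as columns
    projections = U.T @ A                          # every centred training image in face space
    return mean, U, projections

def recognise(test_image, mean, U, projections, labels):
    """Step 3: project the test image and pick the nearest training projection."""
    phi = np.asarray(test_image, dtype=float).reshape(-1, 1) - mean
    w = U.T @ phi                                    # PCA features of the test image
    dists = np.linalg.norm(projections - w, axis=0)  # Euclidean distances in face space
    return labels[int(np.argmin(dists))]
```

With a hypothetical train/ folder, a call such as `T, labels = create_database("train")`, `mean, U, proj = calculate_eigenfaces(T)`, `recognise(test, mean, U, proj, labels)` reproduces the nearest-neighbour classification of step 3; in a full experiment this would be looped over the test set and the false recognitions counted, as in Table 3.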
5. FACE RECOGNITION FIELDS AND APPLICATIONS

In recent times face recognition has been utilized in many fields and its applications have been developed in many phases of life. The following tabulation gives a brief idea of the applications of face recognition.

Table 1. Fields and applications of face recognition
Security: terrorist alert, secure flight boarding systems, stadium audience scanning, computer security, computer application security, database security, file encryption, intranet security, Internet security, medical records, secure trading terminals
Access control: border-crossing control, facility access, vehicle access, smart kiosk and ATM, computer access, computer program access, computer network access, online program access, online transaction access, long-distance learning access, online examination access, online database access
Face ID: driver licenses, entitlement programs, immigration, national ID, passports, voter registration, welfare registration
Surveillance: advanced video surveillance, nuclear plant surveillance, park surveillance, neighborhood watch, power grid surveillance, CCTV control, portal control
Smart cards: stored value security, user authentication
Law enforcement: crime stopping and suspect alert, shoplifter recognition, suspect tracking and investigation, suspect background check, identifying cheats and casino undesirables, post-event analysis, welfare fraud, criminal face retrieval and recognition
Face databases: face indexing and retrieval, automatic face labeling, face classification
Multimedia management: face-based search, face-based video segmentation and summarization, event detection
Human-computer interaction (HCI): interactive gaming, proactive computing
Others: antique photo verification, very low bit-rate image and video transmission, etc.

6. IMAGE DATABASES

A number of algorithms have been proposed for face recognition, and these algorithms have been tested on different databases. Many of these face databases are publicly available and contain face images with a wide variety of poses, illumination angles, gestures, face occlusions, and illuminant colors. However, these images have not been adequately annotated, so their usefulness for evaluating the relative performance of face recognition algorithms is quite limited. For example, many of the images in existing databases are not annotated with the exact pose angles at which they were taken. Descriptions of some of the face databases follow.

6.1 Essex face database 'face94'
The images were captured from a fixed distance with different mouth orientations, so that different facial expressions are captured. The database consists of images of 153 individuals (female and male, 20 images each) at a resolution of 180 by 200 pixels (portrait format). The images have a plain green background, no head scale variation, and only very minor variation in head turn, tilt and slant; there is no variation in image lighting or individual hairstyle [11]. As an example, images corresponding to one individual are shown in Figure 2.
Fig 2: Sample images of one person from the Essex face database.

6.2 The Indian Face Database
This database contains images of 61 distinct subjects, male and female, with eleven different poses for each individual. All the images have a bright homogeneous background and the subjects are in an upright, frontal position. For each individual, the following poses are included: looking front, looking left, looking right, looking up, looking up towards the left, looking up towards the right, and looking down. In addition to the variation in pose, images with four emotions (neutral, smile, laughter, sad/disgust) are included for each individual. The files are in JPEG format and the size of each color image is 640x480 pixels [12]. As an example, images corresponding to one individual are shown in Figure 3.

Fig 3: Sample images from the Indian Face Database.

6.3 Yale Database
The Yale Face Database contains 165 grayscale images in GIF format of 15 individuals. There are 11 images per subject, one per facial expression or configuration: center-light, with glasses, without glasses, happy, left-light, normal, right-light, sad, sleepy, surprised, and wink [13]. As an example, images corresponding to one individual are shown in Figure 4.

Fig 4: Sample images from the Yale database.

6.4 FACE 1999
This face image dataset was collected by Markus Weber at the California Institute of Technology. It consists of 450 frontal face images of 25 unique people with different lighting, expressions and backgrounds. The images are in JPEG format, each 896 x 592 pixels [14]. As an example, images corresponding to one individual are shown in Figure 5.

Fig 5: Sample images from the FACE 1999 database.

6.5 ORL Database (now the AT&T Database of Faces)
These face images were taken between April 1992 and April 1994 in the lab. There are ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement). The files are in PGM format and the size of each image is 92x112 pixels, with 256 grey levels per pixel [15]. As an example, images corresponding to one individual are shown in Figure 6.

Fig 6: Sample images from the ORL database.

6.6 AR database
This face database was created by Aleix Martinez and Robert Benavente at the Computer Vision Center (CVC) at the U.A.B. (http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html). It contains over 4,000 color images corresponding to the faces of 126 people (70 men and 56 women). The images feature frontal views with different facial expressions, illumination conditions, and occlusions (sunglasses and scarf). No restrictions on wear (clothes, glasses, etc.), make-up, hair style, etc. were imposed on the participants. We selected 1,300 images of 50 males and 50 females, with 13 images per person, to test our method. The original images are 768 by 576 pixels [16]. As an example, images corresponding to one individual are shown in Figure 7.
Fig 7: Sample images from the AR database.

6.7 UMIST
This database was created by Daniel B. Graham with the purpose of collecting a controlled set of images whose pose varies uniformly from frontal to side view. It consists of 564 grayscale images of 20 people of both sexes and various races; the image size is about 220 x 220 pixels with 256 grey levels. Various pose angles of each person are provided, ranging from profile to frontal views [17]. The database is now known as the Sheffield face database. As an example, images corresponding to one individual are shown in Figure 8.

Fig 8: Sample images from the UMIST database.

7. RESULTS

In this paper, the basic PCA face recognition technique has been implemented and tested with four different databases, described in Table 2.

Table 2: Databases used in the experiment
Name of database   Image format   Image size   Image type
IFD                jpg            110 x 75     Color
Face94             jpg            90 x 100     Color
Yale               gif            320 x 243    Grey
Face 1999          jpg            300 x 198    Color

The experiment was carried out using MATLAB R2008a with different numbers of samples kept in the training set. The test images were kept in the testing set. Both sets were loaded and a test image was provided for recognition. The experimental results are given in Table 3.

Table 3: Experimental results of Eigenface face recognition with different numbers of sample images per person

IFD (60 unique persons)
  Samples per person:       1      2      3      4      5      6      7      8      9      10     11
  Images in training set:   60     120    180    240    300    360    420    480    540    600    660
  False recognitions:       31     25     16     16     12     8      3      2      1      1      1
  Accuracy rate (%):        49.18  59.01  73.77  73.77  80.32  86.88  95.08  96.72  98.36  98.36  98.36

Face94 (152 unique persons)
  Samples per person:       1      2      3      4      5      6      7      8      9      10     11
  Images in training set:   152    304    456    608    760    912    1064   1216   1368   1520   1672
  False recognitions:       47     29     12     11     11     10     10     9      8      8      6
  Accuracy rate (%):        69.07  80.92  92.10  92.76  92.76  93.42  93.42  94.07  94.73  94.73  96.05

Yale (15 unique persons)
  Samples per person:       1      2      3      4      5      6      7      8      9      10     11
  Images in training set:   15     30     45     60     75     90     105    120    135    150    165
  False recognitions:       8      2      3      3      2      1      2      1      1      1      1
  Accuracy rate (%):        46.66  86.66  80.00  80.00  86.66  93.33  86.66  93.33  93.33  93.33  93.33

Face 1999 (26 unique persons)
  Samples per person:       1      2      3      4      5      6      7      8      9      10     11
  Images in training set:   26     52     78     104    130    156    182    208    234    260    286
  False recognitions:       17     15     14     9      9      8      5      5      3      2      1
  Accuracy rate (%):        34.61  42.30  46.15  65.38  65.38  69.23  80.76  80.76  88.46  92.30  96.15

The results show that the recognition rate is quite low when there is only one image per individual in the training set. As the number of samples is increased, the recognition rate also increases, and for almost all databases the recognition rate is best at around ten images per individual in the training set. With more than ten images per individual in the training set, the time required for recognition increases too much to be acceptable for real-time applications. As far as image format, image size and image type are concerned, the eigenface technique works in all cases.
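The accuracy column in Table 3 is consistent with counting one test image per person and dividing the number of correct recognitions by the number of test images:

```latex
\text{Accuracy}(\%) = \frac{N_{\text{test}} - N_{\text{false}}}{N_{\text{test}}}\times 100,
\qquad \text{e.g. Face94: } \frac{152-47}{152}\times 100 = 69.08\% \ (\text{reported as } 69.07)
```

The reported figures appear to be truncated rather than rounded, and the IFD rows match this formula only if 61 test images were used (the full 61 subjects of the database rather than the 60 listed in the table).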
Figure 9: Recognition accuracy (%) versus number of training samples per person (1 to 11) for the IFD, Face94, Yale and Face 1999 databases.

8. CONCLUSION

In this paper, the Eigenface PCA face recognition technique has been implemented and examined with four different face databases. The experimental results show that around ten images per person gives better face recognition results. In a real-time scenario, however, collecting and storing that many images for the training set is not an easy task. So, there is an urgent need to work towards a high recognition rate for the single-image-per-person problem.

9. REFERENCES
[1] R. Chellappa, C. L. Wilson, and S. Sirohey, 1995, Human and machine recognition of faces: A survey, Proc. of the IEEE, vol. 83, pp. 705-740.
[2] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, 1997, Eigenfaces vs. fisherfaces: Recognition using class specific linear projection, IEEE Trans. Pattern Analysis and Machine Intelligence, 19(7):711-720.
[3] A. S. Georghiades, D. J. Kriegman, and P. N. Belhumeur, 1998, Illumination cones for recognition under variable lighting: Faces, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 52-59.
[4] S. A. Sirohey, 1993, Human face segmentation and identification, Technical Report CAR-TR-695, Center for Automation Research, University of Maryland, College Park, MD.
[5] M. Turk and A. Pentland, 1991, Eigenfaces for recognition, Journal of Cognitive Neuroscience, vol. 3, no. 1.
[6] L. Sirovich and M. Kirby, 1987, Low-dimensional procedure for the characterization of human faces, Journal of the Optical Society of America A, 4(3), 519-524.
[7] A. Samal and P. A. Iyengar, 1992, Automatic recognition and analysis of human faces and facial expressions: A survey, Pattern Recognition, 25(1):65-77.
[8] A. S. Georghiades, D. J. Kriegman, and P. N. Belhumeur, Illumination cones for recognition under variable lighting: Faces, Center for Computational Vision and Control.
[9] C. Shan, S. Gong, and P. W. McOwan, 2009, Facial expression recognition based on Local Binary Patterns: A comparative study, Image and Vision Computing, 27, 803-816.
[10] A. Pentland and T. Choudhury, Feb. 2000, Face recognition for smart environments, Computer, vol. 33, no. 2.
[11] Essex face database 'face94', University of Essex, UK, http://cswww.essex.ac.uk/mv/allfaces/index.html
[12] V. Jain and A. Mukherjee, 2002, The Indian Face Database, http://vis-www.cs.umass.edu/~vidit/IndianFaceDatabase/
[13] Yale Database, http://cvc.yale.edu/projects/yalefaces/yalefaces.html
[14] FACE 1999, http://www.vision.caltech.edu/html-files/archive.html
[15] ORL Database, http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
[16] AR database, http://rvl1.ecn.purdue.edu/~aleix/aleix_face_DB.html
[17] UMIST Face Database, http://www.sheffield.ac.uk/eee/research/iel/research/face
Dr. S. Ravi received his B.E. degree in Computer Technology and Informatics from the Institute of Road and Transport Technology (IRTT), Erode, affiliated to Bharathiar University, Coimbatore, India, in 1989, his M.Tech. degree in Computer and Information Technology from the Centre for Information Technology and Engineering, Manonmaniam Sundaranar University, Tirunelveli, in 2004, and his Ph.D. degree in Computer and Information Technology from the same centre in 2010. He worked as an Associate Professor in the Department of Computer Science, Cardamom Planters' Association College, Bodinayakanur, from 1990, and as an Associate Professor in the Centre for Information Technology and Engineering of Manonmaniam Sundaranar University, Tirunelveli, from 2008. He has been working as an Assistant Professor in the Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, since 2011. His research interests include Digital Image Processing, Face Detection and Face Recognition, Machine Vision and Pattern Recognition. He has published 47 research papers (22 in international journals, 14 in IEEE international conferences, 10 in international conferences and 4 in national conferences). He has been a member of the IEEE since 2006.

Sadique Nayeem received his Bachelor of Technology degree in Computer Science & Engineering from Biju Patnaik University of Technology (BPUT), Rourkela, India, in 2009. He is currently pursuing his Master of Technology degree in Computer Science and Engineering in the Department of Computer Science, School of Engineering and Technology, Pondicherry University, Puducherry, India. His research interests include Face Recognition, Genetic Algorithms and Holographic Memory.