Eye Detection System using Orientation Histogram

Nwe Ni San, Nyein Aye

ISSN: 2278 – 1323
International Journal of Advanced Research in Computer Engineering & Technology (IJARCET)
Volume 2, Issue 4, April 2013

Abstract— Face alignment is an important issue in face
recognition systems. The performance of the face recognition
system depends on the accuracy of face alignment. Since face
alignment is usually conducted using eye positions, an accurate
eye localization algorithm is essential for accurate face
recognition. Eye detection is also required in many applications such as eye-gaze tracking and iris detection for security systems. In this paper, we propose a new eye detection system which can accurately detect the centers of both eyes in a rotated face image. The system is composed of two parts: face region detection, and finding the symmetric axis in the detected face region to locate the position of the eyes. First, the face region is detected and extracted from the frontal-view face image using skin-color information in HSV space. Symmetries are good candidates for describing shapes; symmetry is a powerful concept that facilitates object detection in many situations. We therefore find the symmetric axes of the extracted face region from its gradient orientation histogram using the Fourier method and then distinguish the eye region. Finally, we find the symmetry axis of the eye to locate the centers of the eyes, again using the gradient orientation histogram.
Index Terms— eye detection, skin detection, symmetry detection, localization of eye.

Manuscript received April, 2013.
Nwe Ni San is with the Faculty of Information and Communication Technology, University of Technology (Yatanarpon Cyber City), Pyin Oo Lwin, Myanmar.
Dr. Nyein Aye was Head of the Hardware Department, University of Computer Studies Mandalay, Myanmar.
I. INTRODUCTION
Human face image analysis, detection and recognition have become some of the most important research topics in the field of computer vision and pattern classification. Face recognition has many applications, such as personal identification, criminal investigation, security work, and login authentication.
Automatic recognition of human faces by computer has been
approached in various ways. To develop the face recognition
system, the analytic approach can be used to recognize a face
using the geometrical measurements taken among facial
features, such as eyes and mouth. In the analytic approach,
reliable detection of facial features is fundamental to the
success of the overall system. Thus, the eyes are among the most important facial features, and eye detection is a crucial step in many useful applications.
The eye position detection is important not only for face
recognition but also for eye contour detection. The position
of other facial features can also be estimated using the eye
position to analyze the facial expression in the face. Although
many eye detection methods have been developed in the last decade, a lot of problems still exist. Factors including facial expression, in-plane and in-depth face rotation, occlusion, and lighting conditions all affect the performance of eye detection algorithms. In this paper, we propose a new algorithm for an eye detection system.
The paper is organized as follows. Related work is
reviewed in Section 2. In Section 3, the overall diagram of the
eye detection system is presented. The algorithm of eye
detection system is described in Section 4. The conclusion
and future works are mentioned in Section 5.
II. RELATED WORK
The existing work in eye position detection can be
classified into two categories: active infrared (IR) based
approaches and image-based passive approaches. Eye detection based on active remote IR illumination is a simple yet effective approach, but such methods all rely on an active IR light source to produce the dark or bright pupil effects. In other words, they can only be applied to IR-illuminated eye images. These methods cannot be widely used, because in many real applications the face images are not IR illuminated.
In this paper, we adopt an image-based passive method.
The commonly used approaches for passive eye detection
include the template matching method [1, 2], eigenspace [3]
method, and Hough transform-based method [4,5].
In the template matching method, segments of an input image are compared to previously stored images to evaluate their similarity using correlation values. The
problem with simple template matching is that it cannot deal
with eye variations in scale, expression, rotation and
illumination. Use of multiscale templates was somewhat
helpful in solving the previous problem in template matching.
A method of using deformable templates is proposed by
Yuille et al [6]. This provides the advantage of finding some
extra features of an eye like its shape and size at the same
time. The weaknesses of the deformable templates are that
the processing time is lengthy and success relies on the initial
position of the template. Lam et al. [7] introduced the concept
of eye corners to improve the deformable template approach.
Saber et al. [8] and Jeng et al. [9] proposed to use facial
features geometrical structure to estimate the location of
eyes.
Kumar et al. [10] suggest a technique in which possible
eye areas are localized using a simple thresholding in color
space followed by a connected component analysis to
quantify spatially connected regions and further reduce the
search space to determine the contending eye pair windows.
Finally the mean and variance projection functions are
utilized in each eye pair window to validate the presence of
the eye.
Pentland et al. [3] proposed an eigenspace method for eye
and face detection. If the training database is variable with
respect to appearance, orientation, and illumination, then this
method provides better performance than simple template
matching. But the performance of this method is closely
related to the training set used and this method also requires
normalized sets of training an d test images with respect to
size and orientation.
Another popular eye detection method uses the Hough transform. This method is based on the shape feature of the iris and is often used on binary valley or edge maps [7, 11]. The drawback of this approach is that the performance depends on the threshold values used for binary conversion of the valleys.
Various methods that have been adopted for eye detection
include wavelets, principal component analysis, fuzzy logic,
support vector machines, neural networks, evolutionary
computation and hidden Markov models. The method
proposed in this paper involves skin detection using skin
color information in HSV space and eye detection using
orientation histogram.
III. OVERVIEW OF PROPOSED SYSTEM
Automatic tracking of eyes and gaze direction is an interesting topic in computer vision, with applications in biometrics, security, intelligent human-computer interfaces, and driver drowsiness detection systems. Eye detection is required in many applications. Localization of the eyes is a necessary step for many face classification methods and further facilitates the detection of other facial landmarks.
Fig. 1 Overall Diagram of Eye Detection System
In this section, we present the proposed eye detection system, which consists of three parts: face region detection and extraction, finding the symmetric axis in the detected face region, and defining the position of the eyes using the orientation histogram.
Fig. 1 shows the overall diagram of the proposed eye detection system. In our proposed system, the face region is first extracted from the image using skin-color information in HSV space, which is an efficient space for skin detection. Second, the symmetric axis of the extracted face region is searched to locate the eyes in the face region using the orientation histogram. After finding the eye shape, its symmetric axis is again searched to detect the centers of the eyes in the face region. The following section presents the algorithm of the proposed system.
IV. ALGORITHM OF EYE DETECTION SYSTEM
An important issue in face recognition systems is face alignment. Face alignment involves spatially scaling and rotating a face image to match the face images in the database. Face alignment has a large impact on recognition accuracy and is usually performed with the use of eye positions. In this section, the algorithms of the eye detection system are presented.
A. Skin and Face Detection
To detect the location of the eyes, the first step is the detection of the skin region, which is very important in eye detection. The skin region helps determine the approximate eye position and eliminates a large number of false eye candidates. The proposed skin detection algorithm is described as follows.
Step 1. Convert the input RGB image to HSV image.
Step 2. Get the edge map image from the RGB image using edge detection algorithms.
Step 3. Get H & S values for each pixel and detect the skin region.
Step 4. Find the different regions in the image by implementing connectivity analysis using an 8-connected neighbourhood.
Step 5. Find the height and width for each region and the percentage of skin in each region.
Step 6. For each region, if (height/width) or (width/height) is within the range, then the region is a face.
In the first step, the RGB input image is converted to an HSV image. In the HSV space, H stands for the hue component, which describes the shade of the color; S stands for the saturation component, which describes how pure the hue (color) is; and V stands for the value component, which describes the brightness. The removal of the V component takes care of varying lighting conditions. H varies from 0 to 1 on a circular scale, i.e. the colors represented by H=0 and H=1 are the same. S varies from 0 to 1, with 1 representing 100 percent purity of the color. The H and S scales are partitioned into 100 levels and the color histogram is formed using H and S.
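As a concrete illustration of building such a histogram (the training procedure itself is described in the third step below), the following sketch is our own, not the authors' code; the function name, the use of OpenCV with its 8-bit ranges of H in [0, 180) and S in [0, 256), and normalization by the maximum bin are all assumptions:

```python
import cv2
import numpy as np

N_BINS = 100  # H and S are each partitioned into 100 levels

def train_skin_histogram(skin_patches):
    """Accumulate a normalized 2-D H-S histogram from BGR skin patches
    (hypothetical helper; patches are assumed manually cropped skin)."""
    hist = np.zeros((N_BINS, N_BINS), dtype=np.float64)
    for patch in skin_patches:
        hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
        # Rescale OpenCV's 8-bit H [0, 180) and S [0, 256) to bin indices.
        h = (hsv[:, :, 0].astype(np.float64) / 180.0 * N_BINS).astype(int)
        s = (hsv[:, :, 1].astype(np.float64) / 256.0 * N_BINS).astype(int)
        np.add.at(hist, (h.ravel(), s.ravel()), 1)  # increment each (H, S) bin
    return hist / hist.max()  # normalize so bin heights lie in [0, 1]
```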
In the second step, we need to get the edge information in the image. There are many ways to perform edge detection, but most may be grouped into two categories: gradient and Laplacian. The gradient methods detect edges by looking for the maximum and minimum in the first derivative of the image. The Laplacian methods search for zero crossings in the second derivative of the image to find edges. The Sobel, Prewitt and Roberts operators are gradient methods, while Marr-Hildreth is a Laplacian method. Among these, the Sobel operator is fast, detects edges at fine scales and has smoothing along the edge direction,
which avoids noisy edges. Therefore, the Sobel operator is used for edge detection in this work. The Sobel operator is based on convolving the image with a small, separable, integer-valued filter in the horizontal and vertical directions, and is therefore relatively inexpensive in terms of computation. In simple terms, the operator calculates the gradient of the image intensity at each point, giving the direction of the largest possible increase from light to dark and the rate of change in that direction.
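A minimal sketch of this step, assuming OpenCV's 3×3 Sobel kernels and a hypothetical magnitude threshold for the binary edge map:

```python
import cv2
import numpy as np

def sobel_edge_map(gray, threshold=100.0):
    """Gradient magnitude and binary edge map from horizontal/vertical Sobel."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # derivative along x
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # derivative along y
    magnitude = np.hypot(gx, gy)                     # gradient magnitude
    return magnitude, (magnitude > threshold).astype(np.uint8) * 255
```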
In the third step, we get the H and S values for each pixel to detect skin. In order to determine the skin, we need to train the skin color: we collect color images containing human faces and extract the skin regions in these images manually. The training set contained more than 450,000 skin pixels used to form the color histogram in H and S. For each pixel, the H and S values are found and the bin corresponding to these values in the histogram is incremented by 1. After training is completed, the histogram is normalized. Though it appears
that there are two different regions very much apart with high
skin probability, they both belong to the same region, since H
is cyclic in nature. The skin color falls into a very small
region in the entire HS space. The height of a bin in the
histogram is proportional to the probability that the color
represented by the bin is a skin color. So a threshold between 0 and 1 is used to classify any pixel as a skin pixel or a
non-skin pixel. If the threshold value is high, all the non-skin
pixels will be classified correctly but some of the skin pixels
will be classified as non-skin pixels. If the threshold value is
low, all the skin pixels will be classified correctly whereas
some of the non-skin pixels will be classified as skin pixels.
This represents a trade-off between percentage of skin pixels
detected as skin and percentage of non-skin pixels falsely
detected as skin. There is an optimum threshold value with
which one can detect most of the skin pixels and reject most
of the non-skin pixels. This threshold is found by
experimentation. Given an image, each pixel in the image is
classified as skin or non-skin using color information. The
histogram is normalized and if the height of the bin
corresponding to the H and S values of a pixel exceeds a
threshold called skin-threshold, then that pixel is considered a
skin pixel. Otherwise the pixel is considered a non-skin pixel.
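Assuming the histogram produced by the earlier training sketch, classification then reduces to a table lookup. The threshold value of 0.1 below is a placeholder of ours, since the paper determines the skin-threshold experimentally:

```python
import cv2
import numpy as np

N_BINS = 100

def classify_skin(image_bgr, hist, skin_threshold=0.1):
    """Boolean skin mask: a pixel is skin if the height of its (H, S) bin
    exceeds the experimentally chosen skin-threshold."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    h = (hsv[:, :, 0].astype(np.float64) / 180.0 * N_BINS).astype(int)
    s = (hsv[:, :, 1].astype(np.float64) / 256.0 * N_BINS).astype(int)
    return hist[h, s] > skin_threshold
```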
In the fourth step, the detected skin image is used. At this point one knows whether a pixel is a skin pixel or not, but nothing can be said about whether a pixel belongs to a face; that question cannot be answered at the pixel level. We therefore need to move to a higher level and categorize the skin pixels into different groups, so that each group represents something meaningful, for example a face or a hand. Since we must form meaningful groups of pixels, it makes sense to group pixels that are geometrically connected. The skin pixels in the image are grouped based on an 8-connected neighborhood, i.e. if a skin pixel has another skin pixel in any of its eight neighboring positions, then both pixels belong to the same region.
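This grouping is what a standard connected-components pass provides; a sketch using OpenCV's built-in labeling with 8-connectivity (our illustration, not the authors' implementation):

```python
import cv2
import numpy as np

def label_skin_regions(skin_mask):
    """Group skin pixels into 8-connected regions; label 0 is the background."""
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(
        skin_mask.astype(np.uint8), connectivity=8)
    return labels, stats[1:], centroids[1:]  # drop the background entry
```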
In the fifth and sixth steps, we need to classify each region as a human face or not. This is done by finding the centroid, height and width of the region, as well as the percentage of skin in the rectangular area defined by these parameters. The centroid is the average of the coordinates of all the pixels in that region. The height is found as follows:
• The y-coordinate of the centroid is subtracted from the y-coordinates of all pixels in the region.
• The average of all the positive y-offsets and of all the negative y-offsets is found separately.
• The absolute values of both averages are added and multiplied by 2. This gives the average height of the region.
The average width can be found similarly using the x-coordinates. For each region, if (height/width) or (width/height) is within the range, then the region is a face.
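A sketch of this measurement, following the centroid-based procedure above; the aspect-ratio bounds are hypothetical, since the paper does not state the accepted range:

```python
import numpy as np

def region_dimensions(ys, xs):
    """Average height and width of a region from its pixel coordinates."""
    dy = ys - ys.mean()  # y-offsets from the centroid
    dx = xs - xs.mean()  # x-offsets from the centroid
    # Sum the absolute averages of positive and negative offsets, times 2.
    height = 2 * (dy[dy > 0].mean() + abs(dy[dy < 0].mean()))
    width = 2 * (dx[dx > 0].mean() + abs(dx[dx < 0].mean()))
    return height, width

def is_face_region(ys, xs, lo=0.8, hi=2.0):  # bounds are our assumption
    height, width = region_dimensions(ys, xs)
    return lo <= height / width <= hi or lo <= width / height <= hi
```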
Fig. 2 Skin and Face Detection
Fig. 2 shows the step-by-step results of the above skin and face detection algorithm. In Fig. 2, the open face image button is clicked to browse the input image file. The first four steps of the algorithm are performed by clicking the detect edge and detect skin color buttons, and their results are shown in the figure. The final two steps are performed by clicking the detect face region button, and the detected face region is shown in the last image of Fig. 2.
B. Symmetry Detection
For an image described by f(x,y), the gradient vector [p, q]^T at point (x,y) is defined by:

p = ∂f(x,y)/∂x,  q = ∂f(x,y)/∂y   (1)

The orientation of this gradient vector is:

θ = arctan(q/p)   (2)

The gradient is obtained by using an N×N Sobel filtering operation [12]. In our experiment, we use N=5. It splits the kernels into 21 sub-kernels and iteratively calculates the responses, resulting in two responses for the x- and y-directions respectively. The gradient orientation at each point of the image can then be obtained using Eq. (2). The domain of θ is [−π, π). For detecting the orientation of a rotated face image, half of the range of θ is enough: the negative values of θ are offset by π to become positive. If we have some knowledge about the range of the rotated face angle, we can reduce the domain of θ even further.
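A sketch of the orientation-histogram computation under these definitions; the 5×5 Sobel matches N=5 above, while the magnitude threshold that discards near-flat pixels and the bin count are our assumptions:

```python
import cv2
import numpy as np

def gradient_orientation_histogram(gray, n_bins=360, mag_thresh=20.0):
    """Histogram of theta = arctan(q/p) folded to [0, pi), per Eqs. (1)-(2)."""
    p = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=5)   # p = df/dx
    q = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=5)   # q = df/dy
    theta = np.arctan2(q, p)                         # orientation in (-pi, pi]
    theta[theta < 0] += np.pi                        # offset negatives by pi
    strong = np.hypot(p, q) > mag_thresh             # keep meaningful gradients
    hist, _ = np.histogram(theta[strong], bins=n_bins, range=(0.0, np.pi))
    return hist
```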
Our algorithm is based on an observation about the gradient orientation distribution of the image. For a rotated face image, there will be more points in the image whose gradient orientations are perpendicular to the face's symmetry axis. It is expected that the statistical information of the gradient orientation of an
image can be used for rotated-angle detection. Fig. 3 gives an example shape of the gradient orientation histogram, in which θ is the rotated angle. The orientation histogram of the gradient image can be obtained as described in Eq. (2). The angle θ in the equation is continuous; quantization is needed for the given range and resolution. The resolution of the obtained histogram depends upon the range considered and the number of points in the histogram. For instance, if we would like to have 360 points within the angle range between 45˚ and 135˚, then the angle resolution will be 0.25 degrees. From the obtained histogram h(θ), the orientation of the rotated image is the difference between the angle θ at which h(θ) reaches its maximum and π/2. For an upright image, θ will be π/2, hence the skew angle is zero.
Fig. 3 An illustration for skew detection of a rotated image using gradient
orientation histogram
After the angle of the rotated face has been obtained, the input image can be rotated for skew correction. During the rotation operation, bilinear interpolation of neighbouring pixels is used to reduce noise effects. The rotation can be realized using Eq. (3), where (cx, cy) is the center of rotation and rij, 1 ≤ i, j ≤ 2, are the elements of the rotation matrix.
[Xout, Yout]^T = [r11 r12; r21 r22] ([Xin, Yin]^T − [cx, cy]^T)   (3)
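In practice the skew correction of Eq. (3) can be performed with a standard affine warp; a sketch using OpenCV, where cv2.INTER_LINEAR selects bilinear interpolation:

```python
import cv2
import numpy as np

def deskew(image, angle_rad, center=None):
    """Rotate the image about its center with bilinear interpolation."""
    h, w = image.shape[:2]
    cx, cy = center if center is not None else (w / 2.0, h / 2.0)
    rot = cv2.getRotationMatrix2D((cx, cy), np.degrees(angle_rad), 1.0)
    return cv2.warpAffine(image, rot, (w, h), flags=cv2.INTER_LINEAR)
```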
In this paper we use the Fourier method to obtain the symmetry information. Let h(i), i = 0, 1, …, N−1, be the discretized orientation histogram. The Fourier transform H(u) of h(i), as defined in Eq. (4), will be used to obtain the convolution of two signals, where the two signals are the same, i.e. h(i):

H(u) = Σ_{i=0}^{N−1} h(i) exp(−j2πiu/N),  u = 0, 1, …, N−1   (4)

The convolution of the histogram h(i) can be obtained via the Fourier transform. Based on the convolution theorem

f(x) * g(x) ⟺ F(u) × G(u)   (5)

the convolution in the space domain can be obtained by taking the inverse Fourier transform of the product F(u)G(u). In our case F(u) = G(u) = H(u). Thus, we have h(i) * h(i) ⟺ H²(u).
The accuracy of the orientation of the symmetry axis, obtained by searching the peak positions in the convolution function, can be improved by fitting a parabolic function around the local region of the peak.
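The self-convolution of Eqs. (4)-(5) and the peak refinement can be sketched as follows; the three-point parabola fit is standard sub-bin interpolation and is our reading of the refinement step:

```python
import numpy as np

def histogram_self_convolution(h):
    """Circular convolution h * h obtained as the inverse FFT of H(u)^2."""
    H = np.fft.fft(h)
    return np.real(np.fft.ifft(H * H))

def refine_peak(conv, k):
    """Sub-bin peak location from a parabola through bins k-1, k, k+1."""
    y0, y1, y2 = conv[k - 1], conv[k], conv[(k + 1) % len(conv)]
    denom = y0 - 2 * y1 + y2
    return float(k) if denom == 0 else k + 0.5 * (y0 - y2) / denom
```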
Fig. 4 shows an example detection result of the symmetry axis, acquired by applying the proposed symmetry detection algorithm. The proposed symmetry detection algorithm is described as follows:
Step 1: Perform a gradient operation on the detected face region image;
Step 2: Obtain the gradient orientation histogram and smooth it with a median filter;
Step 3: Search for the convolution peaks of the gradient
orientation histogram by using Fourier transform to obtain
the orientations of symmetry axes;
Step 4: Search for the maximum in this histogram to obtain an initial rotated angle;
Step 5: Perform the symmetry check or evaluation about
the obtained symmetry axes;
Step 6: Obtain the position of this symmetry axis by using
the center of mass of the object;
Step 7: Draw the symmetry axes based on the orientation
and position information obtained;
Step 8: Refine the obtained initial rotated angle value by
locally fitting a cubic polynomial function, and calculate the
maximum analytically;
Step 9: Rotate the image for detecting the eye.
Fig. 4 Example result of symmetry detection
C. Eyes Center Detection
We can obtain the eye region according to the gradient
orientation histogram. Fig. 5 shows the example result of
eyes centre which will be got by applying the following
algorithm. The steps of the eyes extraction algorithm are
described as follow:
Step 1: Perform a gradient operation on the eye image;
Step 2: Obtain the gradient orientation histogram;
Step 3: Search for the convolution peaks of the gradient
orientation histogram by using Fourier transform to obtain
the orientations of symmetry axes;
Step 4: Perform the symmetry check or evaluation about the
obtained symmetry axes;
Step 5: Draw the symmetry axes based on the orientation
and position information obtained;
Step 6: Obtain the eyes center.
Fig. 5 Example result of eyes center
V. CONCLUSION AND FUTURE WORK
Face alignment is an important issue in face recognition
systems. The performance of the face recognition system
depends on the accuracy of face alignment. Since face
alignment is usually conducted using eye positions, an
accurate eye localization algorithm is essential for accurate
face recognition. In this paper, we proposed a new eye detection algorithm that precisely distinguishes the eye region in a rotated face image by using a skew correction method and the orientation histogram. Our system detects the centers of both eyes accurately, and because of the use of the orientation histogram it also supports detecting the eye centers in scaled and otherwise transformed images. Our future work is to verify the system on images captured under various conditions.
ACKNOWLEDGMENT
My sincere thanks to my supervisor, Dr. Nyein Aye, for providing me an opportunity to do my research work. I express my thanks to my institution, the University of Technology (Yatanarpon Cyber City), for providing a good environment and facilities such as Internet access, books, and computers to complete this research work. My heartfelt thanks to my family, friends and colleagues who have helped me in the completion of this work.
REFERENCES
[1] R. Brunelli, T. Poggio, "Face recognition: features versus templates," IEEE Trans. Pattern Anal. Mach. Intell., 15(10) (1993) 1042-1052.
[2] D.J. Beymer, "Face recognition under varying pose," IEEE Proceedings of Int. Conference on Computer Vision and Pattern Recognition (CVPR'94), Seattle, Washington, USA, 1994, pp. 756-761.
[3] A. Pentland, B. Moghaddam, T. Starner, "View-based and modular eigenspaces for face recognition," IEEE Proceedings of Int. Conference on Computer Vision and Pattern Recognition (CVPR'94), Seattle, Washington, USA, 1994, pp. 756-761.
[4] Jiatao Song, Zheru Chi, Jilin Liu, "A robust eye detection method using combined binary edge and intensity information," Pattern Recognition, 39 (2006) 1110-1125.
[5] R.C. Gonzalez and R.E. Woods, Digital Image Processing, 2nd edition, PHI, Chapter 9: Morphological Image Processing.
[6] A.L. Yuille, P.W. Hallinan, D.S. Cohen, "Feature extraction from faces using deformable templates," Int. J. Comput. Vision, 8(2) (1992) 99-111.
[7] K.M. Lam, H. Yan, "Locating and extracting the eye in human face images," Pattern Recognition, Vol. 29, No. 5 (1996) 771-779.
[8] E. Saber and A.M. Tekalp, "Frontal-view face detection and facial feature extraction using color, shape and symmetry based cost functions," Pattern Recognition Letters, Vol. 19, No. 8, pp. 669-680, 1998.
[9] S.H. Jeng, H.Y.M. Liao, C.C. Han, M.Y. Chern, Y.T. Liu, "Facial feature detection using geometrical face model: an efficient approach," Pattern Recognition, Vol. 31, No. 3, March 1998, pp. 273-282.
[10] R. Thilak Kumar, S. Kumar Raja, A.G. Ramakrishnan, "Eye detection using color cues and projection functions," Proceedings 2002 International Conference on Image Processing (ICIP), Vol. 3, pp. 337-340, Rochester, New York, USA.
[11] G. Chow, X. Li, "Towards a system of automatic facial feature detection," Pattern Recognition, 26 (1993) 1739-1755.
[12] P.E. Danielsson and O. Seger, "Generalized and separable Sobel operators," in H. Freeman, editor, Machine Vision for Three-Dimensional Scenes, pages 347-379, Academic Press, 1990.
Nwe Ni San completed her Master of Engineering in Information Technology (M.E-IT) at Mandalay Technology University (MTU). Currently, she is a PhD candidate at the University of Technology (Yatanarpon Cyber City). She works as an assistant lecturer at Technology University (Taunggyi), and her research area is Digital Image Processing.

Recommandé

International Journal of Engineering Research and Development par
International Journal of Engineering Research and DevelopmentInternational Journal of Engineering Research and Development
International Journal of Engineering Research and DevelopmentIJERD Editor
244 vues12 diapositives
A novel approach for performance parameter estimation of face recognition bas... par
A novel approach for performance parameter estimation of face recognition bas...A novel approach for performance parameter estimation of face recognition bas...
A novel approach for performance parameter estimation of face recognition bas...IJMER
479 vues5 diapositives
Face Detection Using Modified Viola Jones Algorithm par
Face Detection Using Modified Viola Jones AlgorithmFace Detection Using Modified Viola Jones Algorithm
Face Detection Using Modified Viola Jones Algorithmpaperpublications3
100 vues8 diapositives
Review of face detection systems based artificial neural networks algorithms par
Review of face detection systems based artificial neural networks algorithmsReview of face detection systems based artificial neural networks algorithms
Review of face detection systems based artificial neural networks algorithmsijma
1.1K vues16 diapositives
Ck36515520 par
Ck36515520Ck36515520
Ck36515520IJERA Editor
367 vues6 diapositives
Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- S... par
Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- S...Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- S...
Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- S...IJERA Editor
267 vues6 diapositives

Contenu connexe

Tendances

Cross Pose Facial Recognition Method for Tracking any Person's Location an Ap... par
Cross Pose Facial Recognition Method for Tracking any Person's Location an Ap...Cross Pose Facial Recognition Method for Tracking any Person's Location an Ap...
Cross Pose Facial Recognition Method for Tracking any Person's Location an Ap...ijtsrd
41 vues5 diapositives
20320130406016 par
2032013040601620320130406016
20320130406016IAEME Publication
578 vues8 diapositives
FACIAL EXTRACTION AND LIP TRACKING USING FACIAL POINTS par
FACIAL EXTRACTION AND LIP TRACKING USING FACIAL POINTSFACIAL EXTRACTION AND LIP TRACKING USING FACIAL POINTS
FACIAL EXTRACTION AND LIP TRACKING USING FACIAL POINTSijcseit
12 vues7 diapositives
E045052731 par
E045052731E045052731
E045052731IJERA Editor
182 vues5 diapositives
Face Recognition par
Face RecognitionFace Recognition
Face RecognitionSaraj Sadanand
962 vues24 diapositives
Week6 face detection par
Week6 face detectionWeek6 face detection
Week6 face detectionHaitham El-Ghareeb
4.5K vues54 diapositives

Tendances(20)

Cross Pose Facial Recognition Method for Tracking any Person's Location an Ap... par ijtsrd
Cross Pose Facial Recognition Method for Tracking any Person's Location an Ap...Cross Pose Facial Recognition Method for Tracking any Person's Location an Ap...
Cross Pose Facial Recognition Method for Tracking any Person's Location an Ap...
ijtsrd41 vues
FACIAL EXTRACTION AND LIP TRACKING USING FACIAL POINTS par ijcseit
FACIAL EXTRACTION AND LIP TRACKING USING FACIAL POINTSFACIAL EXTRACTION AND LIP TRACKING USING FACIAL POINTS
FACIAL EXTRACTION AND LIP TRACKING USING FACIAL POINTS
ijcseit12 vues
IRJET- An Effective System to Detect Face Drowsiness Status using Local F... par IRJET Journal
IRJET-  	  An Effective System to Detect Face Drowsiness Status using Local F...IRJET-  	  An Effective System to Detect Face Drowsiness Status using Local F...
IRJET- An Effective System to Detect Face Drowsiness Status using Local F...
IRJET Journal34 vues
Fully Automatic Facial Feature Point Detection Using Gabor Feature Based Boos... par Yen Ho
Fully Automatic Facial Feature Point Detection Using Gabor Feature Based Boos...Fully Automatic Facial Feature Point Detection Using Gabor Feature Based Boos...
Fully Automatic Facial Feature Point Detection Using Gabor Feature Based Boos...
Yen Ho1.7K vues
Face detection for video summary using enhancement based fusion strategy par eSAT Publishing House
Face detection for video summary using enhancement based fusion strategyFace detection for video summary using enhancement based fusion strategy
Face detection for video summary using enhancement based fusion strategy
Powerful processing to three-dimensional facial recognition using triple info... par IJAAS Team
Powerful processing to three-dimensional facial recognition using triple info...Powerful processing to three-dimensional facial recognition using triple info...
Powerful processing to three-dimensional facial recognition using triple info...
IJAAS Team9 vues
A Hybrid Approach to Recognize Facial Image using Feature Extraction Method par IRJET Journal
A Hybrid Approach to Recognize Facial Image using Feature Extraction MethodA Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction Method
IRJET Journal105 vues
A Review on Face Detection under Occlusion by Facial Accessories par IRJET Journal
A Review on Face Detection under Occlusion by Facial AccessoriesA Review on Face Detection under Occlusion by Facial Accessories
A Review on Face Detection under Occlusion by Facial Accessories
IRJET Journal36 vues
NEURAL NETWORK APPROACH FOR EYE DETECTION par cscpconf
NEURAL NETWORK APPROACH FOR EYE DETECTIONNEURAL NETWORK APPROACH FOR EYE DETECTION
NEURAL NETWORK APPROACH FOR EYE DETECTION
cscpconf69 vues
Explaining Aluminous Ascientification Of Significance Examples Of Personal St... par SubmissionResearchpa
Explaining Aluminous Ascientification Of Significance Examples Of Personal St...Explaining Aluminous Ascientification Of Significance Examples Of Personal St...
Explaining Aluminous Ascientification Of Significance Examples Of Personal St...
Comparative Study of Lip Extraction Feature with Eye Feature Extraction Algor... par Editor IJCATR
Comparative Study of Lip Extraction Feature with Eye Feature Extraction Algor...Comparative Study of Lip Extraction Feature with Eye Feature Extraction Algor...
Comparative Study of Lip Extraction Feature with Eye Feature Extraction Algor...
Editor IJCATR189 vues
FACIAL EXPRESSION RECOGNITION USING DIGITALISED FACIAL FEATURES BASED ON ACTI... par csandit
FACIAL EXPRESSION RECOGNITION USING DIGITALISED FACIAL FEATURES BASED ON ACTI...FACIAL EXPRESSION RECOGNITION USING DIGITALISED FACIAL FEATURES BASED ON ACTI...
FACIAL EXPRESSION RECOGNITION USING DIGITALISED FACIAL FEATURES BASED ON ACTI...
csandit32 vues
A comparative review of various approaches for feature extraction in Face rec... par Vishnupriya T H
A comparative review of various approaches for feature extraction in Face rec...A comparative review of various approaches for feature extraction in Face rec...
A comparative review of various approaches for feature extraction in Face rec...
Vishnupriya T H1.3K vues

En vedette

Ijarcet vol-2-issue-4-1309-1313 par
Ijarcet vol-2-issue-4-1309-1313Ijarcet vol-2-issue-4-1309-1313
Ijarcet vol-2-issue-4-1309-1313Editor IJARCET
370 vues5 diapositives
Volume 2-issue-6-1987-1992 par
Volume 2-issue-6-1987-1992Volume 2-issue-6-1987-1992
Volume 2-issue-6-1987-1992Editor IJARCET
222 vues6 diapositives
Ijarcet vol-2-issue-4-1318-1321 par
Ijarcet vol-2-issue-4-1318-1321Ijarcet vol-2-issue-4-1318-1321
Ijarcet vol-2-issue-4-1318-1321Editor IJARCET
333 vues4 diapositives
Volume 2-issue-6-1965-1968 par
Volume 2-issue-6-1965-1968Volume 2-issue-6-1965-1968
Volume 2-issue-6-1965-1968Editor IJARCET
142 vues4 diapositives
1743 1748 par
1743 17481743 1748
1743 1748Editor IJARCET
253 vues6 diapositives
Volume 2-issue-6-2173-2176 par
Volume 2-issue-6-2173-2176Volume 2-issue-6-2173-2176
Volume 2-issue-6-2173-2176Editor IJARCET
345 vues4 diapositives

Similaire à Ijarcet vol-2-issue-4-1352-1356

Human_face_detection_and_recognition_using_contour.pdf par
Human_face_detection_and_recognition_using_contour.pdfHuman_face_detection_and_recognition_using_contour.pdf
Human_face_detection_and_recognition_using_contour.pdf1AT19CS099RuchithaBM
3 vues6 diapositives
IRJET- Survey of Iris Recognition Techniques par
IRJET- Survey of Iris Recognition TechniquesIRJET- Survey of Iris Recognition Techniques
IRJET- Survey of Iris Recognition TechniquesIRJET Journal
53 vues5 diapositives
IRJET - Emotionalizer : Face Emotion Detection System par
IRJET - Emotionalizer : Face Emotion Detection SystemIRJET - Emotionalizer : Face Emotion Detection System
IRJET - Emotionalizer : Face Emotion Detection SystemIRJET Journal
192 vues6 diapositives
IRJET- Emotionalizer : Face Emotion Detection System par
IRJET- Emotionalizer : Face Emotion Detection SystemIRJET- Emotionalizer : Face Emotion Detection System
IRJET- Emotionalizer : Face Emotion Detection SystemIRJET Journal
507 vues6 diapositives
184 par
184184
184vivatechijri
16 vues5 diapositives
50120140504002 par
5012014050400250120140504002
50120140504002IAEME Publication
278 vues13 diapositives

Similaire à Ijarcet vol-2-issue-4-1352-1356(20)

IRJET- Survey of Iris Recognition Techniques par IRJET Journal
IRJET- Survey of Iris Recognition TechniquesIRJET- Survey of Iris Recognition Techniques
IRJET- Survey of Iris Recognition Techniques
IRJET Journal53 vues
IRJET - Emotionalizer : Face Emotion Detection System par IRJET Journal
IRJET - Emotionalizer : Face Emotion Detection SystemIRJET - Emotionalizer : Face Emotion Detection System
IRJET - Emotionalizer : Face Emotion Detection System
IRJET Journal192 vues
IRJET- Emotionalizer : Face Emotion Detection System par IRJET Journal
IRJET- Emotionalizer : Face Emotion Detection SystemIRJET- Emotionalizer : Face Emotion Detection System
IRJET- Emotionalizer : Face Emotion Detection System
IRJET Journal507 vues
REVIEW OF FACE DETECTION SYSTEMS BASED ARTIFICIAL NEURAL NETWORKS ALGORITHMS par ijma
REVIEW OF FACE DETECTION SYSTEMS BASED ARTIFICIAL NEURAL NETWORKS ALGORITHMSREVIEW OF FACE DETECTION SYSTEMS BASED ARTIFICIAL NEURAL NETWORKS ALGORITHMS
REVIEW OF FACE DETECTION SYSTEMS BASED ARTIFICIAL NEURAL NETWORKS ALGORITHMS
ijma38 vues
A study of techniques for facial detection and expression classification par IJCSES Journal
A study of techniques for facial detection and expression classificationA study of techniques for facial detection and expression classification
A study of techniques for facial detection and expression classification
IJCSES Journal1.2K vues
IRJET- A Review on Various Approaches of Face Recognition par IRJET Journal
IRJET- A Review on Various Approaches of Face RecognitionIRJET- A Review on Various Approaches of Face Recognition
IRJET- A Review on Various Approaches of Face Recognition
IRJET Journal22 vues
AN IMPROVED TECHNIQUE FOR HUMAN FACE RECOGNITION USING IMAGE PROCESSING par ijiert bestjournal
AN IMPROVED TECHNIQUE FOR HUMAN FACE RECOGNITION USING IMAGE PROCESSINGAN IMPROVED TECHNIQUE FOR HUMAN FACE RECOGNITION USING IMAGE PROCESSING
AN IMPROVED TECHNIQUE FOR HUMAN FACE RECOGNITION USING IMAGE PROCESSING
Facial expression identification by using features of salient facial landmarks par eSAT Journals
Facial expression identification by using features of salient facial landmarksFacial expression identification by using features of salient facial landmarks
Facial expression identification by using features of salient facial landmarks
eSAT Journals165 vues
Facial expression identification by using features of salient facial landmarks par eSAT Journals
Facial expression identification by using features of salient facial landmarksFacial expression identification by using features of salient facial landmarks
Facial expression identification by using features of salient facial landmarks
eSAT Journals68 vues
HUMAN FACE RECOGNITION USING IMAGE PROCESSING PCA AND NEURAL NETWORK par ijiert bestjournal
HUMAN FACE RECOGNITION USING IMAGE PROCESSING PCA AND NEURAL NETWORKHUMAN FACE RECOGNITION USING IMAGE PROCESSING PCA AND NEURAL NETWORK
HUMAN FACE RECOGNITION USING IMAGE PROCESSING PCA AND NEURAL NETWORK
FACE DETECTION USING PRINCIPAL COMPONENT ANALYSIS par IAEME Publication
FACE DETECTION USING PRINCIPAL COMPONENT ANALYSISFACE DETECTION USING PRINCIPAL COMPONENT ANALYSIS
FACE DETECTION USING PRINCIPAL COMPONENT ANALYSIS
Multi Local Feature Selection Using Genetic Algorithm For Face Identification par CSCJournals
Multi Local Feature Selection Using Genetic Algorithm For Face IdentificationMulti Local Feature Selection Using Genetic Algorithm For Face Identification
Multi Local Feature Selection Using Genetic Algorithm For Face Identification
CSCJournals232 vues
Local Descriptor based Face Recognition System par IRJET Journal
Local Descriptor based Face Recognition SystemLocal Descriptor based Face Recognition System
Local Descriptor based Face Recognition System
IRJET Journal32 vues
IRJET - Facial Recognition based Attendance System with LBPH par IRJET Journal
IRJET -  	  Facial Recognition based Attendance System with LBPHIRJET -  	  Facial Recognition based Attendance System with LBPH
IRJET - Facial Recognition based Attendance System with LBPH
IRJET Journal24 vues
A SUMMARY OF LITERATURE REVIEW FACE RECOGNITION par Amy Roman
A SUMMARY OF LITERATURE REVIEW  FACE RECOGNITIONA SUMMARY OF LITERATURE REVIEW  FACE RECOGNITION
A SUMMARY OF LITERATURE REVIEW FACE RECOGNITION
Amy Roman5 vues

Plus de Editor IJARCET

Electrically small antennas: The art of miniaturization par
Electrically small antennas: The art of miniaturizationElectrically small antennas: The art of miniaturization
Electrically small antennas: The art of miniaturizationEditor IJARCET
879 vues6 diapositives
Volume 2-issue-6-2205-2207 par
Volume 2-issue-6-2205-2207Volume 2-issue-6-2205-2207
Volume 2-issue-6-2205-2207Editor IJARCET
632 vues3 diapositives
Volume 2-issue-6-2195-2199 par
Volume 2-issue-6-2195-2199Volume 2-issue-6-2195-2199
Volume 2-issue-6-2195-2199Editor IJARCET
595 vues5 diapositives
Volume 2-issue-6-2200-2204 par
Volume 2-issue-6-2200-2204Volume 2-issue-6-2200-2204
Volume 2-issue-6-2200-2204Editor IJARCET
655 vues5 diapositives
Volume 2-issue-6-2190-2194 par
Volume 2-issue-6-2190-2194Volume 2-issue-6-2190-2194
Volume 2-issue-6-2190-2194Editor IJARCET
629 vues5 diapositives
Volume 2-issue-6-2186-2189 par
Volume 2-issue-6-2186-2189Volume 2-issue-6-2186-2189
Volume 2-issue-6-2186-2189Editor IJARCET
567 vues4 diapositives

Plus de Editor IJARCET(20)

Electrically small antennas: The art of miniaturization par Editor IJARCET
Electrically small antennas: The art of miniaturizationElectrically small antennas: The art of miniaturization
Electrically small antennas: The art of miniaturization
Editor IJARCET879 vues

Dernier

Data-centric AI and the convergence of data and model engineering: opportunit... par
Data-centric AI and the convergence of data and model engineering:opportunit...Data-centric AI and the convergence of data and model engineering:opportunit...
Data-centric AI and the convergence of data and model engineering: opportunit...Paolo Missier
29 vues40 diapositives
20231123_Camunda Meetup Vienna.pdf par
20231123_Camunda Meetup Vienna.pdf20231123_Camunda Meetup Vienna.pdf
20231123_Camunda Meetup Vienna.pdfPhactum Softwareentwicklung GmbH
23 vues73 diapositives
Future of Learning - Yap Aye Wee.pdf par
Future of Learning - Yap Aye Wee.pdfFuture of Learning - Yap Aye Wee.pdf
Future of Learning - Yap Aye Wee.pdfNUS-ISS
38 vues11 diapositives
PharoJS - Zürich Smalltalk Group Meetup November 2023 par
PharoJS - Zürich Smalltalk Group Meetup November 2023PharoJS - Zürich Smalltalk Group Meetup November 2023
PharoJS - Zürich Smalltalk Group Meetup November 2023Noury Bouraqadi
113 vues17 diapositives
Combining Orchestration and Choreography for a Clean Architecture par
Combining Orchestration and Choreography for a Clean ArchitectureCombining Orchestration and Choreography for a Clean Architecture
Combining Orchestration and Choreography for a Clean ArchitectureThomasHeinrichs1
68 vues24 diapositives
The Importance of Cybersecurity for Digital Transformation par
The Importance of Cybersecurity for Digital TransformationThe Importance of Cybersecurity for Digital Transformation
The Importance of Cybersecurity for Digital TransformationNUS-ISS
25 vues26 diapositives

Dernier(20)

Data-centric AI and the convergence of data and model engineering: opportunit... par Paolo Missier
Data-centric AI and the convergence of data and model engineering:opportunit...Data-centric AI and the convergence of data and model engineering:opportunit...
Data-centric AI and the convergence of data and model engineering: opportunit...
Paolo Missier29 vues
Future of Learning - Yap Aye Wee.pdf par NUS-ISS
Future of Learning - Yap Aye Wee.pdfFuture of Learning - Yap Aye Wee.pdf
Future of Learning - Yap Aye Wee.pdf
NUS-ISS38 vues
PharoJS - Zürich Smalltalk Group Meetup November 2023 par Noury Bouraqadi
PharoJS - Zürich Smalltalk Group Meetup November 2023PharoJS - Zürich Smalltalk Group Meetup November 2023
PharoJS - Zürich Smalltalk Group Meetup November 2023
Noury Bouraqadi113 vues
Combining Orchestration and Choreography for a Clean Architecture par ThomasHeinrichs1
Combining Orchestration and Choreography for a Clean ArchitectureCombining Orchestration and Choreography for a Clean Architecture
Combining Orchestration and Choreography for a Clean Architecture
The Importance of Cybersecurity for Digital Transformation par NUS-ISS
The Importance of Cybersecurity for Digital TransformationThe Importance of Cybersecurity for Digital Transformation
The Importance of Cybersecurity for Digital Transformation
NUS-ISS25 vues
Transcript: The Details of Description Techniques tips and tangents on altern... par BookNet Canada
Transcript: The Details of Description Techniques tips and tangents on altern...Transcript: The Details of Description Techniques tips and tangents on altern...
Transcript: The Details of Description Techniques tips and tangents on altern...
BookNet Canada119 vues
Architecting CX Measurement Frameworks and Ensuring CX Metrics are fit for Pu... par NUS-ISS
Architecting CX Measurement Frameworks and Ensuring CX Metrics are fit for Pu...Architecting CX Measurement Frameworks and Ensuring CX Metrics are fit for Pu...
Architecting CX Measurement Frameworks and Ensuring CX Metrics are fit for Pu...
NUS-ISS32 vues
Understanding GenAI/LLM and What is Google Offering - Felix Goh par NUS-ISS
Understanding GenAI/LLM and What is Google Offering - Felix GohUnderstanding GenAI/LLM and What is Google Offering - Felix Goh
Understanding GenAI/LLM and What is Google Offering - Felix Goh
NUS-ISS39 vues
How to reduce cold starts for Java Serverless applications in AWS at JCON Wor... par Vadym Kazulkin
How to reduce cold starts for Java Serverless applications in AWS at JCON Wor...How to reduce cold starts for Java Serverless applications in AWS at JCON Wor...
How to reduce cold starts for Java Serverless applications in AWS at JCON Wor...
Vadym Kazulkin70 vues
Black and White Modern Science Presentation.pptx par maryamkhalid2916
Black and White Modern Science Presentation.pptxBlack and White Modern Science Presentation.pptx
Black and White Modern Science Presentation.pptx
The details of description: Techniques, tips, and tangents on alternative tex... par BookNet Canada
The details of description: Techniques, tips, and tangents on alternative tex...The details of description: Techniques, tips, and tangents on alternative tex...
The details of description: Techniques, tips, and tangents on alternative tex...
BookNet Canada110 vues
Empathic Computing: Delivering the Potential of the Metaverse par Mark Billinghurst
Empathic Computing: Delivering  the Potential of the MetaverseEmpathic Computing: Delivering  the Potential of the Metaverse
Empathic Computing: Delivering the Potential of the Metaverse
AI: mind, matter, meaning, metaphors, being, becoming, life values par Twain Liu 刘秋艳
AI: mind, matter, meaning, metaphors, being, becoming, life valuesAI: mind, matter, meaning, metaphors, being, becoming, life values
AI: mind, matter, meaning, metaphors, being, becoming, life values
Beyond the Hype: What Generative AI Means for the Future of Work - Damien Cum... par NUS-ISS
Beyond the Hype: What Generative AI Means for the Future of Work - Damien Cum...Beyond the Hype: What Generative AI Means for the Future of Work - Damien Cum...
Beyond the Hype: What Generative AI Means for the Future of Work - Damien Cum...
NUS-ISS28 vues
Business Analyst Series 2023 - Week 3 Session 5 par DianaGray10
Business Analyst Series 2023 -  Week 3 Session 5Business Analyst Series 2023 -  Week 3 Session 5
Business Analyst Series 2023 - Week 3 Session 5
DianaGray10165 vues

Ijarcet vol-2-issue-4-1352-1356

  • 1. ISSN: 2278 – 1323 International Journal of Advanced Research in Computer Engineering & Technology (IJARCET) Volume 2, Issue 4, April 2013 All Rights Reserved © 2013 IJARCET 1352  Abstract— Face alignment is an important issue in face recognition systems. The performance of the face recognition system depends on the accuracy of face alignment. Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is essential for accurate face recognition. In many applications like eye-gaze tracking, iris detection for security system, the eye detection is also necessitated. In this paper, we propose a new eye detection system which can accurately detect the center of both eyes in the rotated face image. This system is composed of two portions that are face region detection and finding symmetric axis in the resulted face region to locate the position of eyes. Firstly the face region is detected and extracted from the frontal view of face image by using skin-color information in HSV. Symmetries are good candidates for describing shapes. It is a powerful concept that facilitates object detection in many situations. Thus we find the symmetric axes of the extracted face region from gradient orientation histogram of the extracted face region image by using the Fourier method and then distinguish the eyes region. Finally we find the symmetry axis of the eye to locate the center of eyes using gradient orientation histogram. Index Terms— eye detection, skin detection, symmetry detection, localization of eye. I. INTRODUCTION Human face image analysis, detection and recognition have become some of the most important research topics in the field of important research topics in the field of computer vision and pattern classification. The face recognition has many applications such as personal identification, criminal investigation, and security work and login authentication. Automatic recognition of human faces by computer has been approached in various ways. To develop the face recognition system, the analytic approach can be used to recognize a face using the geometrical measurements taken among facial features, such as eyes and mouth. In the analytic approach, reliable detection of facial features is fundamental to the success of the overall system. Thus, eyes are one of the most important facial features and eyes detection is a crucial aspect in many useful applications. The eye position detection is important not only for face recognition but also for eye contour detection. The position of other facial features can also be estimated using the eye position to analyze the facial expression in the face. Although Manuscript received April, 2013. Nwe Ni San is with the Faculty of Information and Communication Technology, University of Technology (Yatanarpon Cyber City), Pyin Oo Lwin, Myanmar . Dr. Nyein Aye was with the Head of Hardware Department, University of Computer Studies Mandalay, Myanmar . many eye detection methods have been developed in the last decade, a lot of problems still exist also. Factor including facial expression, face rotation in plane and depth, occlusion and lighting conditions, and all undoubtedly affect the performance of eye detection algorithms. In this paper, we propose a new algorithm for eye detection system. The paper is organized as follows. Related work is reviewed in Section 2. In Section 3, the overall diagram of the eye detection system is presented. The algorithm of eye detection system is described in Section 4. 
The conclusion and future works are mentioned in Section 5. II. RELATED WORK The existing work in eye position detection can be classified into two categories: active infrared (IR) based approaches and image-based passive approaches. Eye detection based on active remote IR illumination is a simple yet effective approach. But they all rely on an active IR light source to produce the dark or bright pupil effects. In other words, these methods can only be applied to the IR illuminated eye images. It’s certain that these methods would not be widely used, because in many real applications the face images are not IR illuminated. In this paper, we approach image-based passive method. The commonly used approaches for passive eye detection include the template matching method [1, 2], eigenspace [3] method, and Hough transform-based method [4,5]. In the template matching method, segments of an input image are compared to previously stored images, to evaluate the similarity of the counterpart using correlation values. The problem with simple template matching is that it cannot deal with eye variations in scale, expression, rotation and illumination. Use of multiscale templates was somewhat helpful in solving the previous problem in template matching. A method of using deformable templates is proposed by Yuille et al [6]. This provides the advantage of finding some extra features of an eye like its shape and size at the same time. The weaknesses of the deformable templates are that the processing time is lengthy and success relies on the initial position of the template. Lam et al. [7] introduced the concept of eye corners to improve the deformable template approach. Saber et al. [8] and Jeng et al. [9] proposed to use facial features geometrical structure to estimate the location of eyes. Kumar et al. [10] suggest a technique in which possible eye areas are localized using a simple thresholding in color space followed by a connected component analysis to quantify spatially connected regions and further reduce the search space to determine the contending eye pair windows. Eye Detection System using Orientation Histogram Nwe Ni San, Nyein Aye
  • 2. ISSN: 2278 – 1323 International Journal of Advanced Research in Computer Engineering & Technology (IJARCET) Volume 2, Issue 4, April 2013 1353 www.ijarcet.org Finally the mean and variance projection functions are utilized in each eye pair window to validate the presence of the eye. Pentland et al. [3] proposed an eigenspace method for eye and face detection. If the training database is variable with respect to appearance, orientation, and illumination, then this method provides better performance than simple template matching. But the performance of this method is closely related to the training set used and this method also requires normalized sets of training an d test images with respect to size and orientation. Another popular eye detection method is obtained by using the Hough transform. This method is based on the shape feature of and iris and is often used for binary valley or edge maps [7, 11]. The drawback of this approach is that the performance depends on threshold values used for binary conversion of the valleys. Various methods that have been adopted for eye detection include wavelets, principal component analysis, fuzzy logic, support vector machines, neural networks, evolutionary computation and hidden Markov models. The method proposed in this paper involves skin detection using skin color information in HSV space and eye detection using orientation histogram. III. OVERVIEW OF PROPOSED SYSTEM Automatic tracking of eyes and gaze direction is an interesting topic in computer vision with its application in biometric, security, intelligent human-computer interfaces, and driver’s drowsiness detection system. Eye detection is required in many applications. Localization of eyes is a necessary step for many face classification methods and further facilitates the detection of other facial landmarks. Fig. 1 Overall Diagram of Eye Detection System In this section, we present the proposed eye detection system which consists of three portions: face region detection and extraction, finding the symmetric axis in the detected face region and defining the position of eyes using orientation histogram. Fig.1 shows the overall diagram of proposed eye detection system. In our proposed system, the face region is firstly extracted from the image by using skin-color information in HSV space which is an efficient space for skin detection. Secondly symmetric axis of the extracted face region is searched to determine the eye in the face region by using the orientation histogram. After finding the eye shape, its symmetric axis is again searched to detect the center of eyes in the face region. The following section presents the algorithm of proposed system. IV. ALGORITHM OF EYE DETECTION SYSTEM An important issue in face recognition systems is face alignment. Face alignment involves spatially scaling and rotating a face image to match with face images in the database. The face alignment has a large impact on recognition accuracy and is usually performed with the use of eye positions. In this section, the algorithms for eye detection system are presented. A. Skin and Face Detection To detect the location of eyes, the first step we need to do is the detection of the skin region that is very important in eye detection. The skin region can help determining the approximate eye position and eliminates a large number of false eye candidates. The proposed skin detection algorithm is described as follow. Step1. Convert the input RGB image to HSV image. Step2. 
Get the edge map image from RGB image using Edge detection algorithms. Step3.Get H & S values for each pixel and detect skin region. Step4. Find the different regions in the image by implementing connectivity analysis using 8-connected neighbourhood. Step5. Find height and width for each region and percentage of skin in each region. Step6.For each region, if (height/width) or (width/height) is within the range, then the region is a face. In the first step, the RGB input image is converted to HSV image. In the HSV space, H stands for hue component, which describes the shade of the color, S stands for saturation component, which describes how pure the hue (color) is while V stands for value component, which describes the brightness. The removal of V component takes care of varying lighting conditions. H varies from 0 to 1 on a circular scale i.e. the colors represented by H=0 and H=1 are the same. S varies from 0 to 1, 1 representing 100 percent purity of the color. H and S scales are partitioned into 100 levels and the color histogram is formed using H and S. In the second step, we need to get the edge information in the image. There are many ways to perform edge detection. However, the most may be grouped into two categories, gradient and Laplacian. The gradient method detects the edges by looking for the maximum and minimum in the first derivative of the image. The Laplacian method searches for zero crossings in the second derivative of the image to find edges. Sobel, Prewitt and Roberts operators come under gradient method while Marrs-Hildreth is a Laplacian method. Among these, the Sobel operator is fast, detects edges at finest scales and has smoothing along the edge direction, Frontal view face image Face Detection using Skin-color information Extracted Face Image Finding the symmetric axis of extracted face region using gradient orientation histogramFind the position of Eye using Orientation Histogram Check rotation of face Two center of eyes
In the third step, the H and S values of each pixel are used to detect skin. To characterize skin color, we train on color images containing human faces from which the skin regions have been extracted manually. The training set contained more than 450,000 skin pixels, which are used to form the color histogram over H and S: for each pixel, the H and S values are found and the corresponding histogram bin is incremented by 1. After training, the histogram is normalized. Although two well-separated regions of the histogram can both appear to have high skin probability, they belong to the same region, since H is cyclic in nature; skin color in fact falls into a very small region of the entire H-S space. The height of a bin is proportional to the probability that the color represented by the bin is a skin color, so a threshold between 0 and 1 is used to classify each pixel as skin or non-skin. If the threshold is high, all non-skin pixels are classified correctly but some skin pixels are classified as non-skin; if the threshold is low, all skin pixels are classified correctly but some non-skin pixels are classified as skin. This is a trade-off between the percentage of skin pixels detected as skin and the percentage of non-skin pixels falsely detected as skin. There is an optimum threshold with which most skin pixels are detected and most non-skin pixels are rejected; it is found by experimentation. Given an image, each pixel is then classified using this color information: if the height of the bin corresponding to the pixel's H and S values exceeds a threshold called the skin threshold, the pixel is considered a skin pixel; otherwise it is a non-skin pixel.

In the fourth step, the detected skin image is used. At the pixel level one only knows whether a pixel is skin, not whether it belongs to a face, so we must move to a higher level and categorize the skin pixels into groups that represent something meaningful, for example a face or a hand. Since meaningful groups consist of pixels that are geometrically connected, the skin pixels are grouped using an 8-connected neighbourhood: if a skin pixel has another skin pixel in any of its 8 neighbouring positions, both pixels belong to the same region.
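For illustration, the histogram training and skin-threshold classification of the third step can be sketched as follows. The 100-level H-S quantization follows the paper; normalizing by the maximum bin height so that heights lie in [0, 1], the OpenCV value ranges, and the default threshold of 0.1 are assumptions made for this sketch.

```python
import numpy as np

BINS = 100  # H and S are each partitioned into 100 levels

def train_skin_histogram(hsv_skin_pixels):
    # `hsv_skin_pixels` is an (N, 3) uint8 array of manually cropped skin
    # pixels in OpenCV HSV ranges (H in [0, 179], S in [0, 255]).
    h = (hsv_skin_pixels[:, 0] / 179.0 * (BINS - 1)).astype(int)
    s = (hsv_skin_pixels[:, 1] / 255.0 * (BINS - 1)).astype(int)
    hist = np.zeros((BINS, BINS))
    np.add.at(hist, (h, s), 1.0)     # increment the bin for each skin pixel
    return hist / hist.max()         # normalize bin heights into [0, 1]

def classify_skin(hsv_image, hist, skin_threshold=0.1):
    # A pixel is skin if its bin height exceeds the skin threshold
    # (the paper finds the threshold by experimentation).
    h = (hsv_image[..., 0] / 179.0 * (BINS - 1)).astype(int)
    s = (hsv_image[..., 1] / 255.0 * (BINS - 1)).astype(int)
    return hist[h, s] > skin_threshold
```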
In the fifth and sixth steps, each region is classified as a human face or not. This is done by finding the centroid, height, and width of the region, as well as the percentage of skin in the rectangular area defined by these parameters. The centroid is the average of the coordinates of all pixels in the region. The height is found as follows:

- Subtract the y-coordinate of the centroid from the y-coordinates of all pixels in the region.
- Find the averages of the positive and the negative differences separately.
- Add the absolute values of the two averages and multiply by 2; this gives the average height of the region.

The average width is found similarly using the x-coordinates. For each region, if (height/width) or (width/height) is within the expected range, the region is a face.

Fig. 2 Skin and Face Detection

Fig. 2 shows the step-by-step results of the skin and face detection algorithm above. In Fig. 2, the "open face image" button is clicked to browse for the input image file. The first four steps of the algorithm are carried out by clicking the "detect edge" and "detect skin color" buttons, whose results are shown in the figure. The final two steps are carried out by clicking the "detect face region" button, and the detected face region is shown in the last image of Fig. 2.

B. Symmetry Detection

For an image described by f(x,y), the gradient vector [p, q]^T at point (x,y) is defined by

p = ∂f(x,y)/∂x,  q = ∂f(x,y)/∂y    (1)

The orientation of this gradient vector is

θ = arctan(q/p)    (2)

The gradient is obtained using an N×N Sobel filtering operation [12]; in our experiments we use N = 5. The operator splits the kernels into 2×1 sub-kernels and iteratively calculates the responses, yielding two responses, one for the x-direction and one for the y-direction. The gradient orientation at each point of the image is then obtained using Eq. (2). The domain of θ is [-π, π). For detecting the orientation of a rotated face image, half of this range is enough, so the negative values of θ are offset by π to become positive. If some knowledge about the range of the rotation angle is available, the domain of θ can be reduced even further.

Our algorithm is based on an observation about the gradient orientation distribution of the image: in a rotated face image, more points have gradient orientations perpendicular to the symmetry axis. It is therefore expected that the statistical information of the gradient orientation of an image can be used for rotation-angle detection (a code sketch of Eqs. (1)-(2) follows).
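As a minimal sketch of Eqs. (1)-(2) and the quantization step, the orientation histogram can be computed as below. The 5×5 Sobel kernel matches the choice above; the 180-bin resolution, the four-quadrant arctangent, and the masking of near-flat pixels are illustrative assumptions.

```python
import cv2
import numpy as np

def orientation_histogram(gray, bins=180, min_magnitude=1e-3):
    gray = gray.astype(np.float64)
    # Eq. (1): p = df/dx and q = df/dy via a 5x5 Sobel operation.
    p = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=5)
    q = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=5)
    # Eq. (2): gradient orientation, computed over [-pi, pi) with atan2.
    theta = np.arctan2(q, p)
    theta[theta < 0] += np.pi            # offset negative angles by pi
    # Ignore near-flat pixels so the histogram reflects real structure
    # (this masking is an assumption, not part of the paper).
    mask = np.hypot(p, q) > min_magnitude
    hist, _ = np.histogram(theta[mask], bins=bins, range=(0.0, np.pi))
    return hist.astype(np.float64)
```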
Fig. 3 gives an example of the shape of the gradient orientation histogram, with the rotation angle marked. The orientation histogram of the gradient image is obtained as described by Eq. (2). The angle θ in the equation is continuous, so quantization is needed for the given range and resolution. The resolution of the obtained histogram depends on the range considered and the number of points in the histogram; for instance, with 360 points within the angles between 45° and 135°, the angular resolution is 0.25 degree. From the obtained histogram h(θ), the orientation of the rotated image is the difference between the angle θ at which h(θ) is maximal and π/2. For an upright image this θ is π/2, hence the skew angle is zero.

Fig. 3 An illustration of skew detection for a rotated image using the gradient orientation histogram

After the angle of the rotated face has been obtained, the input image can be rotated for skew correction. During the rotation operation, bilinear interpolation of neighbouring pixels is used to reduce noise effects. The rotation is realized using Eq. (3), where (c_x, c_y) is the center of rotation and r_ij, 1 ≤ i, j ≤ 2, are the elements of the rotation matrix:

[x_out]   [r11  r12] [x_in - c_x]
[y_out] = [r21  r22] [y_in - c_y]    (3)

In this paper we use the Fourier method to obtain the symmetry information. Let h(i), i = 0, 1, ..., N-1, be the discretized orientation histogram. The Fourier transform H(u) of h(i), defined in Eq. (4), is used to obtain the convolution of two signals, where the two signals are the same, namely h(i):

H(u) = Σ_{i=0}^{N-1} h(i) exp(-j2πiu/N),  u = 0, 1, ..., N-1    (4)

The convolution of the histogram h(i) with itself can thus be obtained through the Fourier transform. By the convolution theorem,

f(x) * g(x) ⟺ F(u) × G(u)    (5)

the convolution in the space domain is obtained by taking the inverse Fourier transform of the product F(u)G(u). In our case F(u) = G(u) = H(u), so h(i) * h(i) ⟺ H²(u). The accuracy of the symmetry-axis orientation obtained by searching the peak positions of the convolution function can be improved by fitting a parabola around the local region of each peak.

Fig. 4 shows an example symmetry-axis detection result obtained with the proposed symmetry detection algorithm, whose steps are as follows (a code sketch of the Fourier peak search is given after the list):

Step 1: Perform a gradient operation on the detected face region image.
Step 2: Obtain the gradient orientation histogram and smooth it with a median filter.
Step 3: Search for the convolution peaks of the gradient orientation histogram using the Fourier transform to obtain the orientations of the symmetry axes.
Step 4: Search for the maximum in this histogram to obtain an initial rotation angle.
Step 5: Perform a symmetry check or evaluation of the obtained symmetry axes.
Step 6: Obtain the position of each symmetry axis using the center of mass of the object.
Step 7: Draw the symmetry axes based on the orientation and position information obtained.
Step 8: Refine the initial rotation angle by locally fitting a cubic polynomial function and calculating the maximum analytically.
Step 9: Rotate the image for detecting the eye.

Fig. 4 Example result of symmetry detection
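Using the relation h(i) * h(i) ⟺ H²(u), Step 3 can be sketched with NumPy's FFT as follows. The reading that a convolution peak at index k corresponds to twice the axis angle, and the parabolic (rather than cubic) peak refinement, are assumed simplifications of the paper's description.

```python
import numpy as np

def symmetry_axis_orientation(hist):
    """Orientation (radians, in [0, pi)) of the dominant symmetry axis
    from the circular self-convolution of the orientation histogram."""
    n = len(hist)
    H = np.fft.fft(hist)
    conv = np.real(np.fft.ifft(H * H))   # h(i) * h(i) via H(u)^2
    k = int(np.argmax(conv))             # strongest convolution peak
    # Parabolic refinement around the peak for sub-bin accuracy.
    c0, c1, c2 = conv[(k - 1) % n], conv[k], conv[(k + 1) % n]
    denom = c0 - 2.0 * c1 + c2
    delta = 0.5 * (c0 - c2) / denom if denom != 0 else 0.0
    # For a histogram symmetric about angle a, the self-convolution peaks
    # near index 2a, so the refined peak position is halved.
    return ((k + delta) * np.pi / n) / 2.0
```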
C. Eyes Center Detection

The eye region can be obtained from the gradient orientation histogram. Fig. 5 shows an example result for the eye centers obtained by applying the following algorithm (a code sketch combining skew correction and eye-center localization is given after Fig. 5):

Step 1: Perform a gradient operation on the eye image.
Step 2: Obtain the gradient orientation histogram.
Step 3: Search for the convolution peaks of the gradient orientation histogram using the Fourier transform to obtain the orientations of the symmetry axes.
Step 4: Perform a symmetry check or evaluation of the obtained symmetry axes.
Step 5: Draw the symmetry axes based on the orientation and position information obtained.
Step 6: Obtain the eye centers.

Fig. 5 Example result of eyes center detection
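To connect Sections IV-B and IV-C, the sketch below performs the skew correction of Eq. (3) with bilinear interpolation and then reuses the orientation-histogram routines from the earlier sketches on a cropped eye region. Taking the centre of mass of the region as the eye centre is a simplifying assumption standing in for Step 6.

```python
import cv2
import numpy as np

def correct_skew(face_image, angle_rad):
    # Eq. (3): rotate about the image centre; cv2.warpAffine with
    # INTER_LINEAR supplies the bilinear interpolation. The sign of the
    # angle depends on how the skew angle is measured.
    h, w = face_image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0),
                                np.degrees(angle_rad), 1.0)
    return cv2.warpAffine(face_image, M, (w, h), flags=cv2.INTER_LINEAR)

def locate_eye_center(eye_region_gray):
    # Steps 1-6 of Section IV-C on a cropped eye region, reusing
    # orientation_histogram and symmetry_axis_orientation from above.
    hist = orientation_histogram(eye_region_gray)
    axis = symmetry_axis_orientation(hist)
    # Position the axis with the centre of mass of the region; taking the
    # centre of mass itself as the eye centre is a simplification.
    ys, xs = np.nonzero(eye_region_gray)
    return axis, (float(xs.mean()), float(ys.mean()))
```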
V. CONCLUSION AND FUTURE WORK

Face alignment is an important issue in face recognition systems, and the performance of a face recognition system depends on the accuracy of the alignment. Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is essential for accurate face recognition. In this paper, we proposed a new eye detection algorithm that precisely distinguishes the eye region in a rotated face image using skew correction and the orientation histogram. Our system detects the centers of both eyes accurately and, because it uses the orientation histogram, also supports detecting the eye centers in scaled and otherwise transformed images. Our future work is to verify the system on images taken under various conditions.

ACKNOWLEDGMENT

My sincere thanks to my supervisor, Dr. Nyein Aye, for providing me the opportunity to do this research work. I thank my institution, the University of Technology (Yatanarpon Cyber City), for providing a good environment and facilities such as Internet access, books, and computers that supported this research. My heartfelt thanks to my family, friends, and colleagues who have helped me complete this work.

REFERENCES

[1] R. Brunelli, T. Poggio, "Face recognition: features versus templates", IEEE Trans. Pattern Anal. Mach. Intell. 15(10) (1993) 1042-1052.
[2] D. J. Beymer, "Face recognition under varying pose", Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR'94), Seattle, Washington, USA, 1994, pp. 756-761.
[3] A. Pentland, B. Moghaddam, T. Starner, "View-based and modular eigenspaces for face recognition", Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR'94), Seattle, Washington, USA, 1994.
[4] Jiatao Song, Zheru Chi, Jilin Liu, "A robust eye detection method using combined binary edge and intensity information", Pattern Recognition 39 (2006) 1110-1125.
[5] R. C. Gonzalez, R. E. Woods, Digital Image Processing, 2nd ed., PHI; Chapter 9, Morphological Image Processing.
[6] A. L. Yuille, P. W. Hallinan, D. S. Cohen, "Feature extraction from faces using deformable templates", Int. J. Comput. Vision 8(2) (1992) 99-111.
[7] K. M. Lam, H. Yan, "Locating and extracting the eye in human face images", Pattern Recognition 29(5) (1996) 771-779.
[8] E. Saber, A. M. Tekalp, "Frontal-view face detection and facial feature extraction using color, shape and symmetry based cost functions", Pattern Recognition Letters 19(8) (1998) 669-680.
[9] S. H. Jeng, H. Y. M. Liao, C. C. Han, M. Y. Chern, Y. T. Lui, "Facial feature detection using geometrical face model: an efficient approach", Pattern Recognition 31(3) (1998) 273-282.
[10] T. R. Kumar, S. K. Raja, A. G. Ramakrishnan, "Eye detection using color cues and projection functions", Proc. 2002 Int. Conf. on Image Processing (ICIP), Rochester, New York, USA, Vol. 3, pp. 337-340.
[11] G. Chow, X. Li, "Towards a system of automatic facial feature detection", Pattern Recognition 26 (1993) 1739-1755.
[12] P. E. Danielsson, O. Seger, "Generalized and separable Sobel operators", in: H. Freeman (Ed.), Machine Vision for Three-Dimensional Scenes, Academic Press, 1990, pp. 347-379.
Nwe Ni San completed her Master of Engineering in Information Technology (M.E.-IT) at Mandalay Technological University (MTU). She is currently a PhD candidate at the University of Technology (Yatanarpon Cyber City). She works as an assistant lecturer at Technological University (Taunggyi), and her research area is digital image processing.