INTERNATIONAL JOURNAL OF ADVANCED RESEARCH IN ENGINEERING AND TECHNOLOGY (IJARET)
ISSN 0976 - 6480 (Print)
ISSN 0976 - 6499 (Online)
Volume 4, Issue 7, November - December 2013, pp. 139-146
© IAEME: www.iaeme.com/ijaret.asp
Journal Impact Factor (2013): 5.8376 (Calculated by GISI)
www.jifactor.com
EYES DETECTION USING MORPHOLOGICAL IMAGE PROCESSING
THROUGH MATLAB
Mahendra Pratap Singh¹ and Dr. Anil Kumar Sharma²

¹M. Tech. Scholar, Department of Electronic Instrumentation & Control Engineering
²Professor & Principal, Department of Electronics & Communication Engineering
Institute of Engineering & Technology, Alwar-301030 (Raj.), India
ABSTRACT
Nowadays computerized face recognition plays a vital role in criminal identification, security and surveillance systems, human-computer interfacing, and model-based video coding. Many techniques for computerized face recognition are available globally, based on different methods; whichever technique is used, all of them basically follow four steps. In the first step, the face image is enhanced and segmented. In the second, the face boundary and facial features are detected. In the third, the extracted features are matched against the features in the database. In the fourth, the classification into one or more persons is achieved. A great deal of research has been carried out in the recent decade on face recognition and facial-feature-based human-computer interaction. Facial feature extraction is one of the innovative and challenging tasks in the field of human-computer interaction. Many features on the human face can be exploited by detection techniques in human-computer interaction, but among them the "eyes" are the most important because of their versatility of appearance and variety of expression. Although various eye detection schemes are available in the literature, the proposed method is unique with its own features. In this paper we detect the facial feature "eyes" using morphological processing in MATLAB. In this process the eyes on the face are detected in six stages. In the first stage the face is detected. In the second stage the facial features are extracted. In the third stage the edges of the image obtained in the second stage are highlighted using an edge detector. In the fourth stage a morphological operation is applied to the image, removing minor details of size less than 30 pixels. In the fifth stage the algorithm detects the available pairs on the face so that both the eyes and the eyebrows are located. In the final stage the algorithm divides the face into two parts, the lower and the upper face; moving from bottom to top in the upper face, the first features we come across are the eyes. In this way the algorithm completes its process of eye detection on the face. In this work we have used the Japanese Female Facial Expressions (JAFFE) data set (Lyons, Akamatsu, Kamachi, & Gyoba, 1998), which contains 213 photos of 10 Japanese female models posing expressions of happiness, sadness, fear, anger, surprise, disgust, and neutrality. The images in the database are manually cropped to remove the background; the images used for implementation of the work are of size 125x150 pixels. All the images are tested for eye detection one by one using the six-stage algorithm mentioned above, and the simulated results obtained are analyzed. We have found the method successful overall, with an average correct eye detection rate of 83%.
Keywords: Dilation, Erosion, JAFFE, Morphology, Prewitt edge detector.
1. INTRODUCTION
Automatic extraction of human head and face boundaries and facial features is critical in the
areas of face recognition, criminal identification, security and surveillance systems, human computer
interfacing, and model-based video coding [1-3]. In fact, given an input image depicting one or more
human subjects, the problem of evaluating their identity boils down to detecting their faces,
extracting the relevant information needed for their description, and finally devising a matching
algorithm to compare different descriptions. In general, the computerized face recognition includes
four steps [4]. First, the face image is enhanced and segmented. Second, the face boundary and facial
features are detected. Third, the extracted features are matched against the features in the database.
Fourth, the classification into one or more persons is achieved. Face detection and facial feature
detection is a process of locating a human face in an image. It is a challenging task due to the
variations in scale, orientation, pose, facial expressions, partial occlusions and lighting conditions.
Face detection is an important step in automatic face recognition and facial expression recognition [4-5]. Face detection is not straightforward because the appearance of faces varies greatly, for example with pose (frontal or non-frontal), occlusion, image orientation, illumination conditions and facial expression. Face detection is one of the most active research areas in computer vision because of the many interesting applications in fields such as security, surveillance, expression recognition, content-based image/multimedia retrieval, human-computer interaction, law enforcement and biometrics. Based on facial expression, one can predict the intention of a person, for example whether or not they are involved in some doubtful activity. There are many techniques used in facial feature detection, each with its advantages and disadvantages [6-8]. Facial feature extraction consists of localizing the most characteristic face components (eyes, nose, mouth, etc.) within images that depict human faces. There is general agreement that the eyes are the most important facial features, and thus a great research effort has been devoted to their detection and localization [9-10]. For example, the eye states provide important information for recognizing facial expressions, for human-computer interface systems and for driver fatigue monitoring systems. This is due to several reasons:
• Eyes are a crucial source of information about the state of human beings.
• The eye appearance is less variant to certain typical face changes. For instance, eyes are unaffected by the presence of facial hair (such as a beard or mustache), and are little altered by small in-depth rotations and by transparent spectacles.
• The knowledge of the eye positions allows the face scale to be roughly identified (the interocular distance is relatively constant from subject to subject), as well as its in-plane rotation.
• Accurate eye localization permits all the other facial features of interest to be identified.
There are two purposes of eye detection. One is to detect the existence of eyes, and another is
to accurately locate eye positions. Under most situations, the eye position is measured with the pupil
center. Current eye detection methods can be divided into two categories: active and passive eye
detection [11]. The active detection methods use special types of illumination. Under IR
illumination, pupils show physical properties which can be utilized to localize eyes [9, 25]. This step
is essential for the initialization of many face processing techniques like face tracking, facial
expression recognition or face recognition. Among these, face recognition is a lively research area in which a great effort has been made in recent years to design and compare different techniques. However, it has been demonstrated that face recognition suffers heavily from imprecise localization of the face components. This is why it is fundamental to achieve an automatic, robust and precise extraction of the desired features prior to any further processing [11].
2. MORPHOLOGICAL IMAGE PROCESSING
Morphology denotes the branch of biology that deals with the forms of animals and plants as well as their structure. Mathematical morphology is a tool for extracting image components that are useful in the representation and description of region shape, such as boundaries and skeletons. Mathematical morphology basically works on the principles of set theory. In this paper sets represent objects in an image; for instance, the set of all white pixels in a binary image is a complete morphological description of the image [15-17]. The field of mathematical morphology contributes a wide range of operators to image processing, all based on a few simple mathematical concepts from set theory. The operators are particularly useful for the analysis of binary images, and some common usages include edge detection, noise removal, image enhancement and image
segmentation. Morphological techniques typically probe an image with a small shape or template
known as a structuring element. The structuring element is positioned at all possible locations in the
image and it is compared with the corresponding neighborhood of pixels. Morphological operations
differ in how they carry out this comparison. The structuring element is sometimes called the kernel,
but in this work, this term is reserved for the similar objects used in convolutions. It consists of a
pattern specified as the coordinates of a number of discrete points relative to some origin. Normally
Cartesian coordinates are used and so a convenient way of representing the element is as a small
image on a rectangular grid.
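The two fundamental operators built on the structuring element are dilation (a pixel is set if the element, centered there, touches any foreground pixel) and erosion (a pixel survives only if the whole element fits inside the foreground). As an illustrative sketch, here is a minimal Python/NumPy translation of these binary operations with a square structuring element; this is not the authors' MATLAB code, which would use imdilate and imerode.

```python
import numpy as np

def dilate(img, k=3):
    """Binary dilation with a k-by-k square structuring element."""
    pad = k // 2
    padded = np.pad(img, pad, mode='constant', constant_values=0)
    out = np.zeros_like(img)
    for dy in range(k):          # OR together all shifted copies of the image
        for dx in range(k):
            out |= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def erode(img, k=3):
    """Binary erosion: a pixel survives only if the whole element fits."""
    pad = k // 2
    padded = np.pad(img, pad, mode='constant', constant_values=0)
    out = np.ones_like(img)
    for dy in range(k):          # AND together all shifted copies of the image
        for dx in range(k):
            out &= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# A single white pixel grows to a 3x3 square under dilation,
# and eroding that square recovers the original pixel.
img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 1
assert dilate(img).sum() == 9
assert np.array_equal(erode(dilate(img)), img)
```

Dilation followed by erosion (closing) fills small holes; erosion followed by dilation (opening) removes small specks, which is the behavior the eye detection algorithm relies on.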
3. STEPS OF ALGORITHM USED
Each image of the database is individually processed and analyzed to locate the eyes. Locating the eyes in a face image is one of the most crucial steps in face and gesture recognition. This work uses basic morphological processing instead of complicated transforms to locate facial features. The input is thus an individual image from the database, and the output is the location of the eyes, denoted by an asterisk and retrieved as the approximate center of mass at pixel location (x, y). The entire simulation process is explained in the following steps.
(i) First the face image is read from the algorithm database of 213 images.
(ii) Then the face image is filtered for various features.
(iii) Gray-level stretching is used to highlight the major facial features.
(iv) Edges are found for the facial features obtained in step (iii).
(v) The minor features are removed using the morphological operation of dilation with a square structuring element.
(vi) The skeleton is found from the image obtained in the previous step and the holes are filled so that only selected major features of the face image are retained.
(vii) The image is now divided vertically into two halves. The upper half is used for the next processing steps.
(viii) The closed areas in the upper half are identified and labeled.
(ix) The centroids of all closed areas obtained in the upper-half image are found.
(x) The closed areas with the smallest distance between them are found as a possible eye candidate.
(xi) The above selected areas are removed and the next set of closed areas with the smallest distance is found as another possible eye candidate. This is repeated until no more sets are available.
Of the various eye candidates, the ones closest to the lower face are identified as the final eye location.
4. SIMULATION PROCESS
Here we simulate the process of eye detection for Image-1 taken from the JAFFE database. The whole process is carried out in six stages, as explained below.
Stage-1: Image-1 is read from the database using MATLAB, as shown in Fig. 1.
Fig. 1 Original Input Image
Stage-2: In this stage the read image is filtered for various features and the illumination of the image is normalized. This is done using a morphological algorithm implemented in MATLAB. In this stage the major features of the face are highlighted by contrast stretching using gray-level stretching. The results are shown in Fig. 2.
Fig.2 Filtering the Image
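Gray-level stretching of this kind maps the image's occupied intensity range linearly onto the full display range. A minimal sketch of a linear min-max stretch follows (illustrative Python, not the authors' MATLAB code, which might use imadjust):

```python
import numpy as np

def stretch(img, lo=0, hi=255):
    """Linear gray-level stretching: map [min, max] of the image onto [lo, hi]."""
    img = img.astype(float)
    mn, mx = img.min(), img.max()
    if mx == mn:                       # flat image: nothing to stretch
        return np.full_like(img, float(lo))
    return (img - mn) / (mx - mn) * (hi - lo) + lo

# A low-contrast image occupying [100, 150] is spread over the full [0, 255]
img = np.array([[100, 125, 150]])
out = stretch(img)
assert out.min() == 0.0 and out.max() == 255.0
```

The midpoint 125 maps to 127.5, so relative ordering of intensities is preserved while contrast between major facial features is maximized.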
142
- 5. International Journal of Advanced Research in Engineering and Technology (IJARET), ISSN 0976 –
6480(Print), ISSN 0976 – 6499(Online) Volume 4, Issue 7, November – December (2013), © IAEME
Stage-3: In the third stage the edges of the image obtained in the second stage are detected. The Canny detector is the most sensitive and detects even minute edges, but since the main focus of this work is on major features, the Prewitt edge detector is used instead, as shown in Fig. 3.
Fig. 3 Edge of the Major Features
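The Prewitt operator estimates horizontal and vertical gradients with two fixed 3x3 kernels and takes their magnitude. The sketch below is an illustrative Python/NumPy version (in MATLAB this would simply be edge(I, 'prewitt')):

```python
import numpy as np

# Prewitt kernels: PX responds to vertical edges, PY to horizontal edges
PX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
PY = PX.T

def prewitt(img):
    """Gradient magnitude via 3x3 Prewitt filtering ('valid' region only)."""
    img = img.astype(float)
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for dy in range(3):                # accumulate the 3x3 weighted shifts
        for dx in range(3):
            patch = img[dy:dy + h - 2, dx:dx + w - 2]
            gx += PX[dy, dx] * patch
            gy += PY[dy, dx] * patch
    return np.hypot(gx, gy)

# A vertical step edge produces a strong response only at the step
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = prewitt(img)
assert edges.max() > 0 and edges[:, 0].max() == 0
```

Thresholding the resulting magnitude image yields the binary edge map of the major facial features used in the next stage.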
Stage-4: In the fourth stage the image obtained in the previous step is further processed to suppress minor details by using the morphological operation of dilation with a square structuring element. When applied to the image, this operation removes the minor details of size less than 30 pixels, as shown in Fig. 4.
Fig.4 Minor Details Removed
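Suppressing connected components below a given area is what MATLAB's bwareaopen does. As an illustrative sketch of that idea (assuming 4-connectivity; not the authors' code), the area filter can be written with a simple flood fill:

```python
import numpy as np
from collections import deque

def remove_small(img, min_area=30):
    """Drop connected components with fewer than min_area pixels
    (in the spirit of MATLAB's bwareaopen, 4-connectivity assumed)."""
    h, w = img.shape
    seen = np.zeros((h, w), dtype=bool)
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            if img[y, x] and not seen[y, x]:
                # flood-fill one 4-connected component
                comp, q = [], deque([(y, x)])
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_area:      # keep only large components
                    for cy, cx in comp:
                        out[cy, cx] = 1
    return out

# A 36-pixel blob survives the 30-pixel threshold; a 4-pixel speck does not
img = np.zeros((20, 20), dtype=np.uint8)
img[2:8, 2:8] = 1          # 6x6 = 36 pixels
img[15:17, 15:17] = 1      # 2x2 = 4 pixels
out = remove_small(img, 30)
assert out.sum() == 36
```

After this filtering, only the large blobs corresponding to eyes, eyebrows, nose and mouth remain as candidates.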
Stage-5: In the fifth stage the algorithm divides the image into two halves vertically and analyzes the upper half. The algorithm identifies and labels the closed areas in the upper half and finds the centroids of all closed areas in the upper-half image. The algorithm then takes the closed areas with the smallest distance between them as a possible eye candidate. These selected areas are marked, and then the
algorithm finds the next set of closed areas with the smallest distance as another possible eye candidate. This stage is repeated until no more sets are available, as shown in Fig. 5.
Fig. 5 Possible Eye Candidates
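One plausible reading of this pairing step is a greedy grouping of centroids by smallest mutual distance; the sketch below (illustrative Python, not the authors' implementation, with hypothetical coordinates) shows the idea:

```python
import math
from itertools import combinations

def pair_candidates(centroids):
    """Greedily pair centroids by smallest mutual distance: each pair is a
    possible eye/eyebrow candidate pair, as in stage 5 of the algorithm."""
    pts = list(enumerate(centroids))
    pairs = []
    while len(pts) >= 2:
        # pick the closest remaining pair, record it, remove both points
        (i, a), (j, b) = min(
            combinations(pts, 2),
            key=lambda pq: math.hypot(pq[0][1][0] - pq[1][1][0],
                                      pq[0][1][1] - pq[1][1][1]))
        pairs.append((i, j))
        pts = [p for p in pts if p[0] not in (i, j)]
    return pairs

# Four centroids (row, col): two close together on the left, two on the right
cents = [(10, 10), (12, 11), (10, 40), (13, 41)]
pairs = pair_candidates(cents)
assert (0, 1) in pairs and (2, 3) in pairs
```

Each resulting pair is a candidate (eyebrow, eye) grouping; stage 6 then selects, among these pairs, the one closest to the lower face.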
Stage-6: As a result of the fifth stage the algorithm has detected the eyebrows and eyes in the upper-half image, treating both eyebrows and eyes as possible eye candidates. Of the various detected candidates, the ones closest to the lower face are identified as the final eye location, as shown in Fig. 6.
Fig. 6 Final Detected Eyes
Similarly, the same simulation process is applied to all the remaining 212 images of the database. After simulating the output for all 213 images, we find that correct eye detection is successfully obtained by applying the morphological process. Thus we get:

Total no. of input samples = 213 images
Total samples with correct eye detection = 177 images
Total samples with incorrect eye detection = 36 images
Average percentage of correct eye detection = 83%
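The reported figure follows directly from these counts (a quick check, in Python rather than the paper's MATLAB):

```python
# Eye detection accuracy over the JAFFE test set, from the counts above
correct, incorrect = 177, 36
total = correct + incorrect            # 213 images
accuracy = 100.0 * correct / total     # 177/213 of the samples
assert total == 213
assert round(accuracy) == 83           # the reported 83%
```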
5. CONCLUSION AND FUTURE SCOPE
In this work eye detection using morphological processing has been carried out using MATLAB. A total of 213 face images of 10 Japanese female models posing expressions of happiness, sadness, fear, anger, surprise, disgust, and neutrality have been taken from the JAFFE (Japanese Female Facial Expressions) database. The algorithm is applied in six stages. From the simulation and analysis of the results we can conclude that this is a simple and effective technique to detect the eyes on a face; the accuracy achieved is 83%. The technique is independent of facial gesture and person. The algorithm's processing time is reduced because only the upper half of the image is used to locate the eyes. This algorithm can be used as a first step for facial gesture recognition and face recognition models, and the mouth, nose and other facial features can be located on the face in the same way. Further, an artificial neural network may be employed to improve the efficiency.
REFERENCES
[1] Yongzhong Lu, Jingli Zhou, Shengsheng Yu "A Survey of Face Detection, Extraction and Recognition".
[2] Rabia Jafri and Hamid R. Arabnia “A Survey of Face Recognition Techniques” Journal of
Information Processing Systems, Vol.5, No.2, June 2009 41.
[3] Shang-Hung Lin “An Introduction to Face Recognition Technology”.
[4] V. Gomathi, Dr. K. Ramar, and A. Santhiyaku Jeevakumar “Human Facial Expression
Recognition using MANFIS Model” International Journal of Computer Science and
Engineering 3:2 2009.
[5] A. Habibizad Navin, Mir Kamal Mirnia "A New Algorithm to Classify Face Emotions through Eye and Lip Features by Using Particle Swarm Optimization" 2012 4th International Conference on Computer Modeling and Simulation (ICCMS 2012), IPCSIT Vol. 22 (2012), IACSIT Press, Singapore.
[6] Xiaoyi Jiang and Yung-Fu Chen “Facial Image Processing”.
[7] Sina Jahanbin, Hyohoon Choi, Rana Jahanbin, Alan C. Bovik “Automated Facial Feature
Detection and Face Recognition Using Gabor Features on Range And Portrait Images”.
[8] Supriya Kapoor, Shruti Khanna, Rahul Bhatia "Facial Gesture Recognition Using Correlation and Mahalanobis Distance" (IJCSIS) International Journal of Computer Science and Information Security, Vol. 7, No. 2, 2010.
[9] Nilamani Bhoi, Mihir Narayan Mohanty "Template Matching based Eye Detection in Facial Image" International Journal of Computer Applications (0975 - 8887), Volume 12, No. 5, December 2010.
[10] Hawlader Abdullah Al-Mamun, Nadim Jahangir, Md. Shahedul Islam and Md. Ashraful Islam "Eye Detection in Facial Image by Genetic Algorithm Driven Deformable Template Matching" IJCSNS International Journal of Computer Science and Network Security, Vol. 9, No. 8, August 2009.
[11] S.P. Khandait, P.D. Khandait and Dr.R.C.Thool “An Efficient Approach to Facial Feature
Detection for Expression Recognition” International Journal of Recent Trends in
Engineering, Vol 2, No. 1, November 2009.
[12] Frank Y. Shih , Chao-Fa Chuang “Automatic extraction of head and face boundaries and
facial features” Department of Computer Science, Computer Vision Laboratory, College of
Computing Sciences, New Jersey Institute of Technology, Newark, NJ 07102, USA.
[13] Hua Gu, Guangda Su, Cheng Du "Feature Points Extraction from Faces" Research Institute of Image and Graphics, Department of Electronic Engineering, Tsinghua University, Beijing, China.
[14] A.Hemlata and Mahesh Motwani, “Single Frontal Face Detection by Finding Dark Pixel
Group and Comparing Xy-Value of Facial Features”, International Journal of Computer
Engineering & Technology (IJCET), Volume 4, Issue 2, 2013, pp. 471 - 481, ISSN Print:
0976 – 6367, ISSN Online: 0976 – 6375.
[15] Sanjeev Dhawan, Himanshu Dogra “Feature Extraction Techniques for Face Recognition”
International Journal of Engineering, Business and Enterprise Applications (IJEBEA).
[16] Zdravko Liposcak, B.Sc. Sven Loncaric, Ph.D. “Face Recognition from Profiles Using
Morphological Operations”.
[17] Zdravko Liposcak and Sven Loncaric “Face Recognition from Profiles Using Morphological
Signature Transform”.
[18] Dr. Ritu Tiwari, Dr. Anupam Shukla, Chandra Prakash, Dhirender Sharma, Rishi Kumar, Sourabh Sharma "Face Recognition using Morphological Method".
[19] Nallaperumal Krishnan, S. Ravi, Krishnaveni K, Justin Varghese, S. Saudia, R. K. Selvakumar, A. Lenin Fred "Face Detection using Multi-Scale Morphological Segmentation" International Journal of Imaging Science and Engineering (IJISE).
[20] Sambhunath Biswas and Amrita Biswas, “Fourier Mellin Transform Based Face
Recognition”, International Journal of Computer Engineering & Technology (IJCET),
Volume 4, Issue 1, 2013, pp. 8 - 15, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.
[21] Bernd Heisele, Tomaso Poggio, Massimiliano Pontil "Face Detection in Still Gray Images" A.I. Memo No. 1687, C.B.C.L. Paper No. 187, May 2000.
[22] Jyoti Verma and Vineet Richariya, “Face Detection and Recognition Model Based on Skin
Colour and Edge Information for Frontal Face Images”, International Journal of Computer
Engineering & Technology (IJCET), Volume 3, Issue 3, 2012, pp. 384 - 393, ISSN Print:
0976 – 6367, ISSN Online: 0976 – 6375.
[23] Ijaz Khan, Hadi Abdullah and Mohd Shamian Bin Zainal, "Efficient Eyes and Mouth Detection Algorithm Using Combination of Viola Jones and Skin Color Pixel Detection" International Journal of Engineering and Applied Sciences, ISSN 2305-8269, June 2013, Vol. 3, No. 4.
[24] Zeynep Orman, Abdulkadir Battal and Erdem Kemer “A Study on Face, Eye Detection and
Gaze Estimation” International Journal of Computer Science & Engineering Survey (IJCSES)
Vol.2, No.3, August 2011.
[25] D. Sidibe, P. Montesinos, S. Janaqi “A simple and efficient eye detection method in color
images” Author manuscript, published in "International Conference Image and Vision
Computing New Zealand 2006, Nouvelle-Zélande (2006)".