International Journal of Modern Engineering Research (IJMER)
www.ijmer.com, Vol. 2, Issue 4, July-Aug. 2012, pp. 1449-1453, ISSN: 2249-6645


                          Facial Expression Recognition for Security

                       Mrs. Ayesha Butalia, Dr. Maya Ingle, Dr. Parag Kulkarni
Abstract: Facial expressions play an important role in interpersonal relations as well as in security. The malicious intentions of a thief can be recognized with the help of his gestures, of which facial expressions are a major part. This is because humans demonstrate and convey a great deal of evident information visually rather than verbally. Although humans recognize facial expressions virtually without effort or delay, reliable expression recognition by machine remains a challenge.

A picture portrays much more than its equivalent textual description. Along these lines, we assert that although verbal and gestural methods convey valuable information, facial expressions are unparalleled in this regard.

In support of this idea, a facial expression is considered to consist of deformations of facial components and their spatial relations, along with changes in their pigmentation. This paper covers the detection of faces and the localization of facial features, leading to emotion recognition in images.

Key Terms: Facial Gestures, Action Units, Neural Networks, Fiducial Points, Feature Contours.
INTRODUCTION

Facial expression recognition is a basic process performed by every human every day. Each of us analyses the expressions of the individuals we interact with in order to best understand their response to us. The malicious intentions of a thief, or of a person to be interviewed, can be recognized with the help of his gestures, of which facial expressions are a major part. In this paper we highlight facial expressions for security purposes.

As the next step in Human-Computer Interaction, we endeavor to empower the computer with this ability: to discern the emotions depicted on a person's visage. This seemingly effortless task for us needs to be broken down into several parts for a computer to perform. For this purpose, we consider a facial expression to fundamentally represent a deformation of the original features of the face.

On a day-to-day basis, humans commonly recognize emotions by characteristic features displayed as part of a facial expression. For instance, happiness is undeniably associated with a smile, or an upward movement of the corners of the lips. This could be accompanied by upward movement of the cheeks and wrinkles directed outward from the outer corners of the eyes. Similarly, other emotions are characterized by deformations typical of the particular expression.

More often than not, emotions are depicted by subtle changes in some facial elements rather than by their obvious contortion into the typical, defined expression. In order to detect these slight variations, it is important to track fine-grained changes in the facial features.

The general trend in comprehending observable components of facial gestures utilizes FACS, which is also a commonly used psychological approach. This system, as described by Ekman [12], interprets facial information in terms of Action Units, which isolate localized changes in features such as the eyes, lips, eyebrows and cheeks.

The actual process is akin to a divide-and-conquer approach: a step-by-step isolation of facial features, followed by a recombination of their interpretations in order to finally arrive at a conclusion about the emotion depicted.

RELATED WORK

Visually conveyed information is probably the most important communication mechanism, used for centuries and still today. As mentioned by Mehrabian [8], up to 55% of the communicative message is understood through facial expressions. This understanding has sparked an enormous amount of interest in the field of facial gesture analysis over the past couple of decades. Many different techniques and approaches have been proposed and implemented in order to simplify the way computers comprehend and interact with their users. The need for faster and more intuitive Human-Computer Interfaces is ever increasing, with many new innovations coming to the forefront [1].

Azcarate et al. [11] used the concept of Motion Units (MUs) as input to a set of classifiers in their solution to the facial emotion recognition problem. Their concept of MUs is similar to the "Action Units" described by Ekman [12]. Chibelushi and Bourel [3] propose the use of a GMM (Gaussian Mixture Model) for pre-processing and an HMM (Hidden Markov Model) with Neural Networks for AU identification.

Lajevardi and Hussain [15] suggest dynamically selecting a suitable subset of Gabor filters from the available 20 (called Adaptive Filter Selection), depending on the kind of noise present. Gabor filters have also been used by Patil et al. [16]. Lucey et al. [17] have devised a method to detect expressions invariant of registration using Active Appearance Models (AAM). Along with multi-class SVMs, they have used this method to identify expressions that are more generalized and independent of image processing constraints such as pose and illumination.

Noteworthy is the work of Theo Gevers et al. [21] in this field. Their facial expression recognition approach enhances the AAMs mentioned above, as well as MUs. Tree-Augmented Bayesian Networks (TAN), Naive Bayes (NB) classifiers and Stochastic Structure Search (SSS) algorithms are used to effectively classify motion detected in the facial structure dynamically.

Similar to the approach adopted in our system, P. Li et al. [19] have utilized fiducial points to measure feature deviations and OpenCV detectors for face and feature detection.

Moreover, their geometric face model orientation is akin to our approach of polar transformations for processing faces rotated within the same plane (i.e. for tilted heads).

SYSTEM ARCHITECTURE

Fig. 1: Block Diagram of the Facial Recognition System

The task of automatic facial expression recognition from face image sequences is divided into the following sub-problem areas: detecting prominent facial features such as the eyes and mouth, representing subtle changes in facial expression as a set of suitable mid-level feature parameters, and interpreting this data in terms of facial gestures.

As described by Chibelushi and Bourel [3], facial expression recognition shares a generic structure similar to that of face recognition. While face recognition requires that the face be independent of deformations in order to identify the individual correctly, facial expression recognition measures deformations in the facial features in order to classify them. Although the face and feature detection stages are shared by these techniques, their eventual aims differ.
Face detection is widely applied through the HSV segmentation technique. This step narrows the region of interest of the image down to the facial region, eliminating unnecessary information for faster processing.
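As an illustration of this stage, a minimal HSV skin-segmentation sketch is given below, assuming the OpenCV 4 Python bindings and NumPy; the hue/saturation/value thresholds are illustrative placeholders rather than calibrated values from our implementation.

```python
import cv2
import numpy as np

def skin_region(bgr_image):
    """Return a skin mask and the bounding box of the largest skin-coloured blob."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Hypothetical skin-tone range (OpenCV scales: H 0-179, S and V 0-255).
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Remove speckle noise, then keep the largest connected blob as the face.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
    if not contours:
        return mask, None
    largest = max(contours, key=cv2.contourArea)
    return mask, cv2.boundingRect(largest)  # (x, y, w, h) facial region of interest
```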
Analyzing this facial region helps in locating the predominant seven regions of interest (ROIs), viz. the two eyebrows, two eyes, nose, mouth and chin. Each of these regions is then filtered to obtain the desired facial features. Following this step, to spatially sample the contour of a certain permanent facial feature, one or more facial-feature detectors are applied to the pertinent ROI. For example, the contours of the eyes are localized in the eye ROIs by using a single detector representing an adapted version of a hierarchical-perception feature location method [13]. We have performed feature extraction by a method that extracts Haar-like features and classifies them using a tree-like decision structure. This is similar to the process described by Borsboom et al. [7].
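A minimal sketch of this detection step is shown below, assuming the stock Haar cascade files bundled with the opencv-python package (boosted decision structures over Haar-like features); the cascade names and parameters are the standard OpenCV ones, not necessarily those of our implementation.

```python
import cv2

# Stock cascades shipped with the opencv-python package.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face_and_eyes(gray_image):
    """Detect face ROIs, then search for eyes only inside each face ROI."""
    faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        roi = gray_image[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)
        results.append({
            "face": (x, y, w, h),
            # Eye boxes are converted back to whole-image coordinates.
            "eyes": [(x + ex, y + ey, ew, eh) for (ex, ey, ew, eh) in eyes],
        })
    return results
```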
The contours of the facial features, generated by the facial feature detection method, are utilized for further analysis of the shown facial expressions. Similar to the approach taken by Pantic and Rothkrantz [2], we carry out feature point extraction under the assumption that the face images are non-occluded and in frontal view. We extract 22 fiducial points which, once grouped by the ROI they belong to, constitute the action units. The last stage employs the FACS, which is still the most widely used method to classify facial deformations.

The output of the above stage is a set of detected action units. Each emotion is the equivalent of a unique set of action units, represented as rules in first-order logic. These rules are utilized to interpret the most probable emotion depicted by the subject. The rules can be described in first-order predicate logic or propositional logic in the form:

Gesture(surprise) :- eyebrows(raised) ^ eyes(wide-open) ^ mouth(open).

For instance, Fig. 2 shows all the fiducial points marked on the human face. For a particular emotion, a specific combination of points is affected and monitored. For surprise, the following action units are considered:

- Opening of the mouth, indicated by pt 19 and pt 18
- Widening of the eyes, shown by pt 13, pt 14, pt 9 and pt 10
- Raised eyebrows, using pt 2 and pt 5

Fig. 2: Facial points used for the distance definitions [13]
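To make the rule layer concrete, the sketch below shows one way such a conjunction of action units could be evaluated in code. It is a simplified illustration under stated assumptions: the point pairings follow the surprise example above, but the deviation ratio, the 15% tolerance and the use of a neutral-face baseline are hypothetical choices, not the exact measures of our implementation.

```python
from math import dist

def action_units(points, neutral, tol=1.15):
    """points / neutral: dicts mapping fiducial point id -> (x, y) coordinates."""
    def ratio(a, b):
        # How much a point-pair distance has grown relative to the neutral face.
        return dist(points[a], points[b]) / dist(neutral[a], neutral[b])

    aus = set()
    if ratio(18, 19) > tol:                          # mouth opening (pt 18, pt 19)
        aus.add("mouth_open")
    if ratio(9, 13) > tol and ratio(10, 14) > tol:   # eye widening (pt 9/13, pt 10/14)
        aus.add("eyes_wide")
    if ratio(2, 9) > tol and ratio(5, 10) > tol:     # raised eyebrows (pt 2, pt 5)
        aus.add("brows_raised")
    return aus

# Gesture(surprise) :- eyebrows(raised) ^ eyes(wide-open) ^ mouth(open).
RULES = {frozenset({"brows_raised", "eyes_wide", "mouth_open"}): "surprise"}

def classify(aus):
    # First rule whose required action units are all present wins; otherwise neutral.
    return next((gesture for needed, gesture in RULES.items() if needed <= aus),
                "neutral")
```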
Both face and feature localization are challenging because of the face/feature manifold, owing to the high inter-personal variability (e.g. gender and race), the intra-personal changes (e.g. pose, expression, presence/absence of glasses, beard, mustaches), and the acquisition conditions (e.g. illumination and image resolution).

Artificial Neural Networks and Fuzzy Sets

We observed that other applications used an ANN [13] with one hidden layer [1] to implement recognition of gestures for a single facial feature (the lips) alone. It follows that one such ANN will be necessary for each independent feature or Action Unit that is to be interpreted for emotion analysis. We intend to utilize Neural Networks to help the system learn (in a semi-supervised manner) how to customize its recognition of expressions for different specialized cases or environments.

Fuzzy logic [22] (and fuzzy sets) is a form of multi-valued logic derived from fuzzy set theory to deal with reasoning that is approximate rather than exact. Pertaining to facial expressions, fuzzy logic comes into the picture when a certain degree of deviation from the expected set of values for a particular facial expression is to be made permissible.

Fig. 3: The Two-Layer Neuro-Fuzzy Architecture [13]

This is necessary because each person's facial muscles contort in slightly different ways while showing similar emotions. The deviated features may thus have slightly different coordinates, yet the algorithm should work irrespective of these minor differences. Fuzzy logic is used to increase the success rate of recognizing facial expressions across different people. A neuro-fuzzy architecture incorporates both techniques, merging known values and rules with variable conditions, as shown in Fig. 3. Chatterjee and Shi [18] have also utilized the neuro-fuzzy concept to classify expressions, as well as to assign weights to facial features depending on their importance to the emotion detection process.
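A minimal sketch of how fuzzy sets can soften the crisp thresholds used in the earlier rule sketch follows: each deviation ratio is mapped to a membership degree in [0, 1], so small person-to-person differences still contribute evidence instead of being cut off. The ramp breakpoints (1.0 and 1.3) are illustrative assumptions.

```python
def membership_raised(ratio, low=1.0, high=1.3):
    """Degree in [0, 1] to which a feature counts as 'raised' / 'open'."""
    if ratio <= low:
        return 0.0
    if ratio >= high:
        return 1.0
    return (ratio - low) / (high - low)      # linear ramp between the breakpoints

def surprise_degree(mouth_ratio, eye_ratio, brow_ratio):
    # Fuzzy AND of the three conditions, taken here as the minimum membership.
    return min(membership_raised(mouth_ratio),
               membership_raised(eye_ratio),
               membership_raised(brow_ratio))
```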
POTENTIAL ISSUES

Geographical Versatility

One of the fundamental problems faced by any software that accepts images of the user as input is that it needs to account for the geographical variations bound to occur in them. Though people coming from one geographical area share certain physiological similarities, they tend to be distinct from those belonging to another region. These physical differences will create many inconsistencies in the input obtained.

To tackle this issue, the software must employ a technique that widens the scale of permissible deviations, so that divergence up to a certain degree can be rounded off and the corresponding feature still recognized. Support Vector Machines [10] help the software do this.

Fundamentally, a set of SVMs will be used to specify and incorporate rules for each facial feature and its related possible gestures. The more rules there are for a particular feature, the greater the degree of accuracy, even for the smallest of micro-expressions and for higher-resolution images.

SVMs were originally designed for binary classification, and there are currently two types of approaches [10] for multi-class SVMs. The first approach is to construct and combine several binary classifiers (also called the "one-per-class" method), while the other is to consider all the data in one single optimization formulation. SVMs can also be used to form slightly customized rules for drastically different geographical populations: once the nativity of the subject is identified, the appropriate multi-class SVM can be used for further analysis and processing.

Thus, this method uses SVMs to take care of the differences and inconsistencies due to such environmental variations.
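A minimal sketch of the one-per-class idea with scikit-learn is given below, keeping a separate multi-class SVM per hypothetical geographical group. The feature vectors, group names and random data are placeholders for the fiducial-point displacement vectors produced by the earlier stages, not our actual training data.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Purely illustrative data: rows are fiducial-point displacement vectors
# (22 points, a dx and a dy each); labels are the four emotive states.
rng = np.random.default_rng(0)
X = rng.random((40, 22 * 2))
y = rng.choice(["joy", "sorrow", "neutral", "surprise"], size=40)

# One multi-class SVM per (hypothetical) geographical group; in practice each
# would be trained on data gathered from that population.
models = {}
for group in ["group_A", "group_B"]:
    clf = OneVsRestClassifier(SVC(kernel="rbf"))   # "one-per-class" strategy
    models[group] = clf.fit(X, y)

predicted = models["group_A"].predict(X[:1])       # classify a new displacement vector
```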
Personalization and Calibration

The process of emotion detection can be further enhanced by training the system using semi-supervised learning. Since past data about a particular subject's emotive expressions can be stored and made available to the system later, it may serve to improve the machine's "understanding" and be used as a base for inferring more about a person's emotional state from a more experienced perspective.

One approach to supervised learning, on inception of the system, is for AU and gesture mapping. Here, the user gives feedback to the system on his/her images flashed on the screen, in the form of a gesture name. This calibration process may be allowed approximately a minute or two to complete. In this way the user is effectively tutoring the system to recognize basic gestures so that it can build on this knowledge in a semi-supervised manner later on. This feedback is stored in a database along with related entries for the parameters extracted from the region of interest (ROI) of the displayed image. The same feedback is also fed into the neuro-fuzzy network structure, which performs the translation of the input parameters to obtain the eventual result; it is used to shift the baseline for judging the input parameters and classifying them as part of the processing involved in the system. For further input received, the neural network then behaves in tandem with its new, personalized rule set.

The user can also request that training examples be gathered at discrete time intervals and provide a label for each. These can be combined with the displacements output by the feature extraction phase and added as new examples to the training set. Another way to increase the personalized information is to maintain an active history of the user, similar to a session. This data is used further on to implement a context-based understanding of the user and to predict the likelihood of emotions that may be felt in the near future.

Calibration and personalization would further increase the recognition performance and allow a tradeoff between generality and precision. By applying these methods, the system in focus will be able to recognize the user's facial gestures with very high accuracy; this approach is highly suitable for developing a personalized, user-specific application.
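A minimal sketch of this calibration loop is shown below: each user-supplied gesture label is stored alongside the corresponding displacement vector, and the classifier is refit on the enlarged, personalized training set. The names used here (e.g. extract_displacements) are placeholders for earlier stages, not functions from our implementation.

```python
calibration_X, calibration_y = [], []

def add_calibration_example(image, gesture_label, extract_displacements, classifier):
    """Store one user-labelled example and refit the personalised classifier."""
    features = extract_displacements(image)       # fiducial-point displacement vector
    calibration_X.append(features)                # persisted alongside the ROI data
    calibration_y.append(gesture_label)           # gesture name supplied by the user
    classifier.fit(calibration_X, calibration_y)  # shifts the decision baseline
    return classifier
```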


Dealing with Missing Values

The uncontrolled conditions of the real world, and the numerous methods that can be employed to obtain the information for processing, can cause extensive occlusions and create a vast scope for missing values in the input. In the history of linear PCA, the problem of missing values has long been considered an essential issue and has been investigated in depth [24][25]. In this paper the problem of missing values is tackled by taking the facial symmetry of the subject into consideration.

Though human faces are not exactly symmetrical for most expressions, we create a perfectly symmetrical face using the left or right side of the face (depending on which one is clearer). We call this the reference image. The creation of the reference image using the left and right sides of the face respectively is shown in Fig. 4.

Fig. 4: Left: original face; Centre: symmetrical face created using the right side; Right: symmetrical face created using the left side.

Now the original image is first scanned for the occluded or blank areas of the face, which constitute the set of missing values. After all the locations of blockage have been identified, the reference image is superimposed on this scanned image. During the superimposition, the facial symmetry is thus utilized to complete the input values wherever necessary, and the missing values are taken care of.
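A minimal sketch of this symmetry-based fill-in is given below, assuming NumPy arrays and a Boolean occlusion mask produced by an earlier detection step; the use of the image centre as the mirroring axis is a simplifying assumption (the real midline would come from the detected face geometry).

```python
import numpy as np

def fill_missing(face, occlusion_mask, use_left=True):
    """face: H x W (x 3) array; occlusion_mask: H x W boolean array of missing pixels."""
    w = face.shape[1]
    reference = face.copy()
    if use_left:
        half = face[:, :w // 2]                    # clearer half of the face
        reference[:, w - w // 2:] = half[:, ::-1]  # mirror it onto the other side
    else:
        half = face[:, w - w // 2:]
        reference[:, :w // 2] = half[:, ::-1]
    filled = face.copy()
    filled[occlusion_mask] = reference[occlusion_mask]  # copy pixels only where occluded
    return filled
```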
Overcoming Time Constraints

The main challenge in analyzing the face for the expression being portrayed is the amount of processing time required. This becomes a bottleneck in real-time scenarios. One way of overcoming this time constraint is to divide the image into its various regions of interest and analyze each in parallel, independently. For instance, the face can be divided into the regions of the mouth, chin, eyes, eyebrows etc., and the parallel analyses of these can be combined to get the final result. This approach yields a gain in the throughput of the system.

Fig. 5: Delegation of Image Components to Parallel Processors
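A minimal sketch of this parallel delegation using Python's process pool follows; analyse_region is a placeholder for whichever region-specific detector applies, and the result structure is illustrative.

```python
from concurrent.futures import ProcessPoolExecutor

def analyse_region(named_roi):
    """Run the region-specific detector on one ROI (placeholder implementation)."""
    name, pixels = named_roi
    # ... region-specific feature extraction would happen here ...
    return name, {"displacements": []}

def analyse_face(rois):
    """rois: dict mapping region name ('mouth', 'eyes', ...) to its pixel array."""
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(analyse_region, rois.items()))
    return results            # combined downstream by the rule / neuro-fuzzy layer
```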
RESULTS

In this section we present the resulting effectiveness of our system's facial expression recognition capability, as well as the subsequent emotion classification. The confusion matrix below shows, for a dataset of images depicting a particular emotion, how many were identified correctly and how many were confused with other emotions. The dataset comprised images taken of an expert user having knowledge of what a typical emotion is expected to show, as per the universal norms described by Ekman [12]. Our dataset comprised 60 images of Indian faces, with 15 each depicting the Joy, Sorrow, Surprise and Neutral emotive states.
                                                                there are fewer ways to disguise or misread from visual
Table 1: Person-dependent confusion matrix for training and test data supplied by an expert user (Total Accuracy: 71.7%)

Expression | Joy | Sorrow | Neutral | Surprise | Overall
Joy        |  11 |    1   |    1    |    2     |  73.3%
Sorrow     |   2 |   10   |    2    |    1     |  66.7%
Neutral    |   3 |    2   |   10    |    0     |  66.7%
Surprise   |   2 |    0   |    1    |   12     |  80.0%

As is evident from Table 1, our system shows greater accuracy for Joy and Surprise, with accuracy rates of 73.3% and 80.0% respectively. The average accuracy of our system is 71.7%. These results can be further improved by personalizing the system as discussed in the previous section. A personalized system is tuned to its user's specific facial actions and hence, after a calibration period, can be expected to give an average accuracy of up to 90%.
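The per-class and overall figures follow directly from the row counts (15 images per class, 60 in total), as the short sketch below verifies.

```python
confusion = {
    "Joy":      {"Joy": 11, "Sorrow": 1,  "Neutral": 1,  "Surprise": 2},
    "Sorrow":   {"Joy": 2,  "Sorrow": 10, "Neutral": 2,  "Surprise": 1},
    "Neutral":  {"Joy": 3,  "Sorrow": 2,  "Neutral": 10, "Surprise": 0},
    "Surprise": {"Joy": 2,  "Sorrow": 0,  "Neutral": 1,  "Surprise": 12},
}

per_class = {k: row[k] / sum(row.values()) for k, row in confusion.items()}
overall = (sum(row[k] for k, row in confusion.items())
           / sum(sum(row.values()) for row in confusion.values()))
# per_class["Joy"] = 11/15 = 73.3%, per_class["Surprise"] = 12/15 = 80.0%
# overall = 43/60 = 71.7%, matching the total accuracy reported above
```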
FUTURE SCOPE AND APPLICATIONS

As human facial expression recognition is a very elementary process, it is useful for evaluating the mood or emotional state of a subject under observation. As such, tremendous potential lies untapped in this domain. The basic idea of a machine being able to comprehend the human emotive state can be put to use in innumerable scenarios, a few of which we mention here.

- The ability to detect and track a user's state of mind has the potential to allow a computing system to offer relevant information when a user needs help, not just when the user requests help; for instance, changing the room ambience by judging the mood of the person entering it.
- Helping people in emotion-related research to improve the processing of emotion data.
- Applications in surveillance and security. For instance, computer models have obtained up to 71% correct classification of innocent or guilty participants based on the macro features extracted from video camera footage.
- In this regard, lie detection amongst criminal suspects during interrogation is also a useful aspect for which this system can form a base. It is known that facial cues can, more often than not, give away a lie to the trained eye.
- Patient monitoring in hospitals to judge the effectiveness of prescribed drugs is one application in the health sector. In addition, diagnosis of diseases that alter facial features and psychoanalysis of a patient's mental state are further possibilities.
- Clever marketing is feasible using emotional knowledge of a patron, tailored to what a patron might need based on his/her state of mind at any instant.
- Detecting symptoms such as drowsiness, fatigue, or even inebriation can be done using this system. Thus, by helping in the process of driver monitoring, this system can play an integral role in reducing road mishaps.

Also worthy of mention is the advantage of reading emotions through facial cues, as our system does, over reading them through the study of audio data of human voices. Not only is the probability of noise affecting and distorting the input reduced for our system, but there are also fewer ways to disguise or misread visual information as opposed to audio data, as is also stated by Busso et al. [23]. Hence facial expressions would also form a vital part of a multimodal system for emotion analysis.

CONCLUSION

As is quite evident after plenty of research and deliberation, gaining insight into what a person may be feeling is very valuable for many reasons. The future scope of this field is visualized to be practically limitless, with more futuristic applications visible on the horizon, especially in the field of security.

Our facial expression recognition system, utilizing a neuro-fuzzy architecture, is 71.7% accurate, which is approximately the level of accuracy expected from a support vector machine approach as well. Every system has its limitations. Although this particular implementation of facial expression recognition may perform less accurately than end users expect, it is envisioned to contribute significantly to the field, upon which similar work can be furthered and enhanced. Our aim in building such a system is to form a standard protocol that may be used as a component in the many applications that will no doubt require an emotion-based HCI.

Empowering computers in this way has the potential of changing the very way a machine "thinks". It gives them the ability to understand humans as "feelers" rather than "thinkers". With this in mind, this system can even be implemented in the context of Artificial Intelligence. As part of the relentless efforts of many to create intelligent machines, facial expressions and emotions have always played, and always will play, a vital role.
REFERENCES
[1]  Piotr Dalka, Andrzej Czyzewski, "Human-Computer Interface Based on Visual Lip Movement and Gesture Recognition", International Journal of Computer Science and Applications, Technomathematics Research Foundation, Vol. 7, No. 3, pp. 124-139, 2010.
[2]  M. Pantic, L.J.M. Rothkrantz, "Facial Gesture Recognition in Face Image Sequences: A Study on Facial Gestures Typical for Speech Articulation", Delft University of Technology.
[3]  Claude C. Chibelushi, Fabrice Bourel, "Facial Expression Recognition: A Brief Overview", pp. 1-5, 2002.
[4]  Aysegul Gunduz, Hamid Krim, "Facial Feature Extraction Using Topological Methods", IEEE, 2003.
[5]  Kyungnam Kim, "Face Recognition Using Principal Component Analysis".
[6]  Abdallah S. Abdallah, A. Lynn Abbott, Mohamad Abou El-Nasr, "A New Face Detection Technique Using 2D DCT and Self Organizing Feature Map", World Academy of Science, Engineering and Technology, 27, 2007.
[7]  Sander Borsboom, Sanne Korzec, Nicholas Piel, "Improvements on the Human Computer Interaction Software: Emotion", Methodology, pp. 1-8, 2007.
[8]  A. Mehrabian, "Communication without Words", Psychology Today, 2(4), pp. 53-56, 1968.
[9]  M. Pantic, L.J.M. Rothkrantz, "Expert System for Automatic Analysis of Facial Expressions", Image and Vision Computing, 18, pp. 881-905, 2000.
[10] Li Chaoyang, Liu Fang, Xie Yinxiang, "Face Recognition Using Self-Organizing Feature Maps and Support Vector Machines", Proceedings of the Fifth ICCIMA, 2003.
[11] Aitor Azcarate, Felix Hageloh, Koen van de Sande, Roberto Valenti, "Automatic Facial Emotion Recognition", University of Amsterdam, pp. 1-10, June 2005.
[12] P. Ekman, "Strong Evidence for Universals in Facial Expressions", Psychological Bulletin, 115(2), pp. 268-287, 1994.
[13] A. Raouzaiou et al., "A Hybrid Intelligence System for Facial Expression Recognition", EUNITE 2002, pp. 482-490.
[14] Y. Liu et al., "Facial Asymmetry Quantification for Expression Invariant Human Identification".
[15] Seyed Mehdi Lajevardi, Zahir M. Hussain, "A Novel Gabor Filter Selection Based on Spectral Difference and Minimum Error Rate for Facial Expression Recognition", 2010 Digital Image Computing: Techniques and Applications, pp. 137-140.
[16] Ketki K. Patil et al., "Facial Expression Recognition and Head Tracking in Video Using Gabor Filtering", Third International Conference on Emerging Trends in Engineering and Technology, pp. 152-157.
[17] Patrick Lucey, Simon Lucey, Jeffrey Cohn, "Registration Invariant Representations for Expression Detection", 2010 Digital Image Computing: Techniques and Applications, pp. 255-261.
[18] Suvam Chatterjee, Hao Shi, "A Novel Neuro-Fuzzy Approach to Human Emotion Determination", 2010 Digital Image Computing: Techniques and Applications, pp. 282-287.
[19] P. Li et al., "Automatic Recognition of Smiling and Neutral Facial Expressions", 2010 Digital Image Computing: Techniques and Applications, pp. 581-586.
[20] Irfan A. Essa, "Coding, Analysis, Interpretation and Recognition of Facial Expressions", IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 757-763, July 1997.
[21] Roberto Valenti, Nicu Sebe, Theo Gevers, "Facial Expression Recognition: A Fully Integrated Approach", ICIAPW '07: Proceedings of the 14th International Conference on Image Analysis and Processing, pp. 125-130, 2007.
[22] L. A. Zadeh, "Fuzzy Sets", Information and Control, 8, pp. 338-353, 1965.
[23] Carlos Busso et al., "Analysis of Emotion Recognition Using Facial Expressions, Speech and Multimodal Information", ACM, 2004.
[24] N. F. Troje et al., "How Is Bilateral Symmetry of Human Faces Used for Recognition of Novel Views", Vision Research, pp. 79-89, 1998.
[25] R. Thornhill et al., "Facial Attractiveness", Trends in Cognitive Sciences, December 1999.


Contenu connexe

Tendances

Local feature extraction based facial emotion recognition: A survey
Local feature extraction based facial emotion recognition: A surveyLocal feature extraction based facial emotion recognition: A survey
Local feature extraction based facial emotion recognition: A survey
IJECEIAES
 
4837410 automatic-facial-emotion-recognition
4837410 automatic-facial-emotion-recognition4837410 automatic-facial-emotion-recognition
4837410 automatic-facial-emotion-recognition
Ngaire Taylor
 
Facial expression using 3 d animation
Facial expression using 3 d animationFacial expression using 3 d animation
Facial expression using 3 d animation
IAEME Publication
 
Facial expression using 3 d animation
Facial expression using 3 d animationFacial expression using 3 d animation
Facial expression using 3 d animation
iaemedu
 

Tendances (18)

Deep Neural NEtwork
Deep Neural NEtworkDeep Neural NEtwork
Deep Neural NEtwork
 
IRJET - Survey on Different Approaches of Depression Analysis
IRJET - Survey on Different Approaches of Depression AnalysisIRJET - Survey on Different Approaches of Depression Analysis
IRJET - Survey on Different Approaches of Depression Analysis
 
A new alley in Opinion Mining using Senti Audio Visual Algorithm
A new alley in Opinion Mining using Senti Audio Visual AlgorithmA new alley in Opinion Mining using Senti Audio Visual Algorithm
A new alley in Opinion Mining using Senti Audio Visual Algorithm
 
Efficient Facial Expression and Face Recognition using Ranking Method
Efficient Facial Expression and Face Recognition using Ranking MethodEfficient Facial Expression and Face Recognition using Ranking Method
Efficient Facial Expression and Face Recognition using Ranking Method
 
A Survey on Local Feature Based Face Recognition Methods
A Survey on Local Feature Based Face Recognition MethodsA Survey on Local Feature Based Face Recognition Methods
A Survey on Local Feature Based Face Recognition Methods
 
Local feature extraction based facial emotion recognition: A survey
Local feature extraction based facial emotion recognition: A surveyLocal feature extraction based facial emotion recognition: A survey
Local feature extraction based facial emotion recognition: A survey
 
Recognition of Facial Emotions Based on Sparse Coding
Recognition of Facial Emotions Based on Sparse CodingRecognition of Facial Emotions Based on Sparse Coding
Recognition of Facial Emotions Based on Sparse Coding
 
A fuzzy logic based on sentiment
A fuzzy logic based on sentimentA fuzzy logic based on sentiment
A fuzzy logic based on sentiment
 
IRJET-Facial Expression Recognition using Efficient LBP and CNN
IRJET-Facial Expression Recognition using Efficient LBP and CNNIRJET-Facial Expression Recognition using Efficient LBP and CNN
IRJET-Facial Expression Recognition using Efficient LBP and CNN
 
SPEECH BASED EMOTION RECOGNITION USING VOICE
SPEECH BASED  EMOTION RECOGNITION USING VOICESPEECH BASED  EMOTION RECOGNITION USING VOICE
SPEECH BASED EMOTION RECOGNITION USING VOICE
 
4837410 automatic-facial-emotion-recognition
4837410 automatic-facial-emotion-recognition4837410 automatic-facial-emotion-recognition
4837410 automatic-facial-emotion-recognition
 
Elicitation of Apt Human Emotions based on Discrete Wavelet Transform in E-Le...
Elicitation of Apt Human Emotions based on Discrete Wavelet Transform in E-Le...Elicitation of Apt Human Emotions based on Discrete Wavelet Transform in E-Le...
Elicitation of Apt Human Emotions based on Discrete Wavelet Transform in E-Le...
 
Facial expression using 3 d animation
Facial expression using 3 d animationFacial expression using 3 d animation
Facial expression using 3 d animation
 
Facial expression using 3 d animation
Facial expression using 3 d animationFacial expression using 3 d animation
Facial expression using 3 d animation
 
IRJET- Sentimental Analysis on Audio and Video using Vader Algorithm -Monali ...
IRJET- Sentimental Analysis on Audio and Video using Vader Algorithm -Monali ...IRJET- Sentimental Analysis on Audio and Video using Vader Algorithm -Monali ...
IRJET- Sentimental Analysis on Audio and Video using Vader Algorithm -Monali ...
 
J01116164
J01116164J01116164
J01116164
 
IRJET- Sentimental Analysis on Audio and Video
IRJET- Sentimental Analysis on Audio and VideoIRJET- Sentimental Analysis on Audio and Video
IRJET- Sentimental Analysis on Audio and Video
 
Speech emotion recognition
Speech emotion recognitionSpeech emotion recognition
Speech emotion recognition
 

En vedette

Data and Information Integration: Information Extraction
Data and Information Integration: Information ExtractionData and Information Integration: Information Extraction
Data and Information Integration: Information Extraction
IJMER
 
Analysis of Multicast Routing Protocols: Puma and Odmrp
Analysis of Multicast Routing Protocols: Puma and OdmrpAnalysis of Multicast Routing Protocols: Puma and Odmrp
Analysis of Multicast Routing Protocols: Puma and Odmrp
IJMER
 
Numerical Analysis of Fin Side Turbulent Flow for Round and Flat Tube Heat E...
Numerical Analysis of Fin Side Turbulent Flow for Round and  Flat Tube Heat E...Numerical Analysis of Fin Side Turbulent Flow for Round and  Flat Tube Heat E...
Numerical Analysis of Fin Side Turbulent Flow for Round and Flat Tube Heat E...
IJMER
 
The everlasting gospel
The everlasting gospelThe everlasting gospel
The everlasting gospel
Robert Taylor
 
Bi31199203
Bi31199203Bi31199203
Bi31199203
IJMER
 
Experimental Investigation of Electrode Wear in Die-Sinking EDM on Different ...
Experimental Investigation of Electrode Wear in Die-Sinking EDM on Different ...Experimental Investigation of Electrode Wear in Die-Sinking EDM on Different ...
Experimental Investigation of Electrode Wear in Die-Sinking EDM on Different ...
IJMER
 
Ijmer 46066571
Ijmer 46066571Ijmer 46066571
Ijmer 46066571
IJMER
 

En vedette (20)

Data and Information Integration: Information Extraction
Data and Information Integration: Information ExtractionData and Information Integration: Information Extraction
Data and Information Integration: Information Extraction
 
On Normal and Unitary Polynomial Matrices
On Normal and Unitary Polynomial MatricesOn Normal and Unitary Polynomial Matrices
On Normal and Unitary Polynomial Matrices
 
Modelling the Effect of Industrial Effluents on Water Quality: “A Case Study ...
Modelling the Effect of Industrial Effluents on Water Quality: “A Case Study ...Modelling the Effect of Industrial Effluents on Water Quality: “A Case Study ...
Modelling the Effect of Industrial Effluents on Water Quality: “A Case Study ...
 
A Novel Method for Creating and Recognizing User Behavior Profiles
A Novel Method for Creating and Recognizing User Behavior ProfilesA Novel Method for Creating and Recognizing User Behavior Profiles
A Novel Method for Creating and Recognizing User Behavior Profiles
 
A brave new web - A talk about Web Components
A brave new web - A talk about Web ComponentsA brave new web - A talk about Web Components
A brave new web - A talk about Web Components
 
Ijtihad
IjtihadIjtihad
Ijtihad
 
Analysis of Multicast Routing Protocols: Puma and Odmrp
Analysis of Multicast Routing Protocols: Puma and OdmrpAnalysis of Multicast Routing Protocols: Puma and Odmrp
Analysis of Multicast Routing Protocols: Puma and Odmrp
 
Numerical Analysis of Fin Side Turbulent Flow for Round and Flat Tube Heat E...
Numerical Analysis of Fin Side Turbulent Flow for Round and  Flat Tube Heat E...Numerical Analysis of Fin Side Turbulent Flow for Round and  Flat Tube Heat E...
Numerical Analysis of Fin Side Turbulent Flow for Round and Flat Tube Heat E...
 
The everlasting gospel
The everlasting gospelThe everlasting gospel
The everlasting gospel
 
Bi31199203
Bi31199203Bi31199203
Bi31199203
 
Experimental Investigation of Electrode Wear in Die-Sinking EDM on Different ...
Experimental Investigation of Electrode Wear in Die-Sinking EDM on Different ...Experimental Investigation of Electrode Wear in Die-Sinking EDM on Different ...
Experimental Investigation of Electrode Wear in Die-Sinking EDM on Different ...
 
GPS cycle slips detection and repair through various signal combinations
GPS cycle slips detection and repair through various signal combinationsGPS cycle slips detection and repair through various signal combinations
GPS cycle slips detection and repair through various signal combinations
 
Implementation of Wide Band Frequency Synthesizer Base on DFS (Digital Frequ...
Implementation of Wide Band Frequency Synthesizer Base on  DFS (Digital Frequ...Implementation of Wide Band Frequency Synthesizer Base on  DFS (Digital Frequ...
Implementation of Wide Band Frequency Synthesizer Base on DFS (Digital Frequ...
 
Modeling, Simulation And Implementation Of Adaptive Optical System For Laser ...
Modeling, Simulation And Implementation Of Adaptive Optical System For Laser ...Modeling, Simulation And Implementation Of Adaptive Optical System For Laser ...
Modeling, Simulation And Implementation Of Adaptive Optical System For Laser ...
 
Regeneration of Liquid Desiccant in Solar Passive Regenerator with Enhanced ...
Regeneration of Liquid Desiccant in Solar Passive Regenerator  with Enhanced ...Regeneration of Liquid Desiccant in Solar Passive Regenerator  with Enhanced ...
Regeneration of Liquid Desiccant in Solar Passive Regenerator with Enhanced ...
 
Ijmer 46066571
Ijmer 46066571Ijmer 46066571
Ijmer 46066571
 
Integration of Struts & Spring & Hibernate for Enterprise Applications
Integration of Struts & Spring & Hibernate for Enterprise ApplicationsIntegration of Struts & Spring & Hibernate for Enterprise Applications
Integration of Struts & Spring & Hibernate for Enterprise Applications
 
Experimental Investigations of Exhaust Emissions of four Stroke SI Engine by ...
Experimental Investigations of Exhaust Emissions of four Stroke SI Engine by ...Experimental Investigations of Exhaust Emissions of four Stroke SI Engine by ...
Experimental Investigations of Exhaust Emissions of four Stroke SI Engine by ...
 
Wheel Speed Signal Time-Frequency Transform and Tire Pressure Monitoring Sys...
Wheel Speed Signal Time-Frequency Transform and Tire  Pressure Monitoring Sys...Wheel Speed Signal Time-Frequency Transform and Tire  Pressure Monitoring Sys...
Wheel Speed Signal Time-Frequency Transform and Tire Pressure Monitoring Sys...
 
Reduction of Topology Control Using Cooperative Communications in Manets
Reduction of Topology Control Using Cooperative Communications in ManetsReduction of Topology Control Using Cooperative Communications in Manets
Reduction of Topology Control Using Cooperative Communications in Manets
 

Similaire à A02414491453

Facial expression using 3 d animation
Facial expression using 3 d animationFacial expression using 3 d animation
Facial expression using 3 d animation
iaemedu
 
Facial expression using 3 d animation
Facial expression using 3 d animationFacial expression using 3 d animation
Facial expression using 3 d animation
iaemedu
 
1 facial expression 9 (1)
1 facial expression 9 (1)1 facial expression 9 (1)
1 facial expression 9 (1)
prjpublications
 
Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...
Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...
Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...
Sana Nasar
 

Similaire à A02414491453 (20)

Facial expression using 3 d animation
Facial expression using 3 d animationFacial expression using 3 d animation
Facial expression using 3 d animation
 
Facial expression using 3 d animation
Facial expression using 3 d animationFacial expression using 3 d animation
Facial expression using 3 d animation
 
1 facial expression 9 (1)
1 facial expression 9 (1)1 facial expression 9 (1)
1 facial expression 9 (1)
 
Face expression recognition using Scaled-conjugate gradient Back-Propagation ...
Face expression recognition using Scaled-conjugate gradient Back-Propagation ...Face expression recognition using Scaled-conjugate gradient Back-Propagation ...
Face expression recognition using Scaled-conjugate gradient Back-Propagation ...
 
PERFORMANCE EVALUATION AND IMPLEMENTATION OF FACIAL EXPRESSION AND EMOTION R...
PERFORMANCE EVALUATION AND IMPLEMENTATION  OF FACIAL EXPRESSION AND EMOTION R...PERFORMANCE EVALUATION AND IMPLEMENTATION  OF FACIAL EXPRESSION AND EMOTION R...
PERFORMANCE EVALUATION AND IMPLEMENTATION OF FACIAL EXPRESSION AND EMOTION R...
 
Emotion Recognition from Facial Expression Based on Fiducial Points Detection...
Emotion Recognition from Facial Expression Based on Fiducial Points Detection...Emotion Recognition from Facial Expression Based on Fiducial Points Detection...
Emotion Recognition from Facial Expression Based on Fiducial Points Detection...
 
Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...
Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...
Final Year Project - Enhancing Virtual Learning through Emotional Agents (Doc...
 
Facial expression recognition based on local binary patterns final
Facial expression recognition based on local binary patterns finalFacial expression recognition based on local binary patterns final
Facial expression recognition based on local binary patterns final
 
Image processing
Image processingImage processing
Image processing
 
A Comprehensive Survey on Human Facial Expression Detection
A Comprehensive Survey on Human Facial Expression DetectionA Comprehensive Survey on Human Facial Expression Detection
A Comprehensive Survey on Human Facial Expression Detection
 
Ijip 965SVM Based Recognition of Facial Expressions Used In Indian Sign Language
Ijip 965SVM Based Recognition of Facial Expressions Used In Indian Sign LanguageIjip 965SVM Based Recognition of Facial Expressions Used In Indian Sign Language
Ijip 965SVM Based Recognition of Facial Expressions Used In Indian Sign Language
 
Facial Expression Recognition System: A Digital Printing Application
Facial Expression Recognition System: A Digital Printing ApplicationFacial Expression Recognition System: A Digital Printing Application
Facial Expression Recognition System: A Digital Printing Application
 
Facial Expression Recognition System: A Digital Printing Application
Facial Expression Recognition System: A Digital Printing ApplicationFacial Expression Recognition System: A Digital Printing Application
Facial Expression Recognition System: A Digital Printing Application
 
Emotion Recognition using Image Processing
Emotion Recognition using Image ProcessingEmotion Recognition using Image Processing
Emotion Recognition using Image Processing
 
IRJET- Real Time Facial Expression Detection using SVD and Eigen Vector
IRJET-  	  Real Time Facial Expression Detection using SVD and Eigen VectorIRJET-  	  Real Time Facial Expression Detection using SVD and Eigen Vector
IRJET- Real Time Facial Expression Detection using SVD and Eigen Vector
 
A Study on Face Recognition Technique based on Eigenface
A Study on Face Recognition Technique based on EigenfaceA Study on Face Recognition Technique based on Eigenface
A Study on Face Recognition Technique based on Eigenface
 
Mind reading computer report
Mind reading computer reportMind reading computer report
Mind reading computer report
 
USER EXPERIENCE AND DIGITALLY TRANSFORMED/CONVERTED EMOTIONS
USER EXPERIENCE AND DIGITALLY TRANSFORMED/CONVERTED EMOTIONSUSER EXPERIENCE AND DIGITALLY TRANSFORMED/CONVERTED EMOTIONS
USER EXPERIENCE AND DIGITALLY TRANSFORMED/CONVERTED EMOTIONS
 
Fl33971979
Fl33971979Fl33971979
Fl33971979
 
Fl33971979
Fl33971979Fl33971979
Fl33971979
 

Plus de IJMER

Fabrication & Characterization of Bio Composite Materials Based On Sunnhemp F...
Fabrication & Characterization of Bio Composite Materials Based On Sunnhemp F...Fabrication & Characterization of Bio Composite Materials Based On Sunnhemp F...
Fabrication & Characterization of Bio Composite Materials Based On Sunnhemp F...
IJMER
 
Intrusion Detection and Forensics based on decision tree and Association rule...
Intrusion Detection and Forensics based on decision tree and Association rule...Intrusion Detection and Forensics based on decision tree and Association rule...
Intrusion Detection and Forensics based on decision tree and Association rule...
IJMER
 
Material Parameter and Effect of Thermal Load on Functionally Graded Cylinders
to the facial emotion recognition problem. Their concept of MUs is similar to the "Action Units" described by Ekman [12]. Chibelushi and Bourel [3] propose the use of a GMM (Gaussian Mixture Model) for pre-processing, and an HMM (Hidden Markov Model) combined with neural networks for AU identification.
Lajevardi and Hussain [15] suggest dynamically selecting a suitable subset of Gabor filters from the available 20 (called Adaptive Filter Selection), depending on the kind of noise present. Gabor filters have also been used by Patil et al. [16]. Lucey et al. [17] have devised a method to detect expressions invariant of registration using Active Appearance Models (AAMs). Along with multi-class SVMs, they have used this method to identify expressions that are more generalized and independent of image-processing constraints such as pose and illumination.
Noteworthy in this field is the work of Theo Gevers et al. [21]. Their facial expression recognition approach enhances the AAMs mentioned above, as well as MUs. Tree-Augmented Naive Bayes (TAN), Naive Bayes (NB) classifiers and Stochastic Structure Search (SSS) algorithms are used to classify motion detected in the facial structure dynamically. Similar to the approach adopted in our system, P. Li et al. have utilized fiducial points to measure feature deviations, and OpenCV detectors for face and feature detection. Moreover, their geometric face model orientation is akin to our approach of polar transformations for processing faces rotated within the same plane (i.e. for tilted heads).
In the next step to human-computer interaction, we endeavor to empower the computer with this ability: to discern the emotions depicted on a person's visage. This seemingly effortless task for us needs to be broken down into several parts for a computer to perform. For this purpose, we consider a facial expression to represent, fundamentally, a deformation of the original features of the face. On a day-to-day basis, humans commonly recognize emotions by the characteristic features displayed as part of a facial expression. For instance, happiness is undeniably associated with a smile, or an upward movement of the corners of the lips, possibly accompanied by an upward movement of the cheeks and wrinkles directed outward from the outer corners of the eyes. Similarly, other emotions are characterized by deformations typical to the particular expression. More often than not, however, emotions are conveyed by subtle changes in some facial elements rather than by an obvious contortion into the textbook expression. In order to detect these slight variations, it is important to track fine-grained changes in the facial features.
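As an aside, the idea of tracking fine-grained feature changes can be made concrete with a small sketch. The Python fragment below is illustrative only; the per-point layout and the use of the inter-ocular distance for normalisation are assumptions, not the authors' exact implementation. It represents an expression as the displacement of each fiducial point from its position in a neutral reference frame:

    import numpy as np

    def feature_displacements(neutral_pts, expressive_pts):
        # Per-point displacement vectors between the neutral face and the
        # expressive face; both arrays hold one (x, y) row per fiducial point.
        neutral = np.asarray(neutral_pts, dtype=float)
        expressive = np.asarray(expressive_pts, dtype=float)
        return expressive - neutral

    def normalised_displacements(neutral_pts, expressive_pts, inter_ocular_dist):
        # Scaling by the inter-ocular distance keeps the measure comparable
        # across image resolutions and distances to the camera.
        return feature_displacements(neutral_pts, expressive_pts) / inter_ocular_dist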
SYSTEM ARCHITECTURE

Fig. 1: Block Diagram of the Facial Recognition System

The task of automatic facial expression recognition from face image sequences is divided into the following sub-problem areas: detecting prominent facial features such as the eyes and mouth, representing subtle changes in facial expression as a set of suitable mid-level feature parameters, and interpreting this data in terms of facial gestures.
As described by Chibelushi and Bourel [3], facial expression recognition shares a generic structure similar to that of face recognition. While face recognition requires the face to be independent of deformations in order to identify the individual correctly, facial expression recognition measures deformations of the facial features in order to classify them. Although the face and feature detection stages are shared by the two techniques, their eventual aims differ.
Face detection is widely applied through the HSV segmentation technique. This step narrows the region of interest of the image down to the facial region, eliminating unnecessary information for faster processing. Analyzing this facial region helps in locating the seven predominant regions of interest (ROIs), viz. the two eyebrows, two eyes, nose, mouth and chin. Each of these regions is then filtered to obtain the desired facial features. Following this step, to spatially sample the contour of a given permanent facial feature, one or more facial-feature detectors are applied to the pertinent ROI. For example, the contours of the eyes are localized in the ROIs of the eyes by using a single detector representing an adapted version of a hierarchical-perception feature-location method [13]. We have performed feature extraction by a specialized method that extracts Haar-like features and classifies them using a tree-like decision structure, similar to the process described by Borsboom et al. [7].
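For readers who want to experiment, this face and feature detection step can be prototyped with the stock Haar-cascade detectors that ship with OpenCV. This is only a sketch of the general technique; the cascade files and detection parameters below are the standard OpenCV ones and are not taken from the paper:

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def detect_face_and_eyes(image_path):
        # Returns the first detected face ROI (grayscale) and the eye boxes inside it.
        gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None, []
        x, y, w, h = faces[0]
        face_roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(face_roi)
        return face_roi, eyes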
The contours of the facial features, generated by the facial-feature detection method, are utilized for further analysis of the shown facial expressions. Similar to the approach taken by Pantic and Rothkrantz [2], we carry out feature-point extraction under the assumption that the face images are non-occluded and in frontal view. We extract 22 fiducial points which, grouped by the ROI they belong to, constitute the action units. The last stage employs the FACS, which is still the most widely used method to classify facial deformations.
The output of the above stage is a set of detected action units. Each emotion is the equivalent of a unique set of action units, represented as rules in first-order logic. These rules are utilized to interpret the most probable emotion depicted by the subject. The rules can be described in first-order predicate logic or propositional logic in the form: Gesture(surprise) :- eyebrows(raised) ^ eyes(wide-open) ^ mouth(open). Fig. 2 shows all the fiducial points marked on the human face; for a particular emotion, a specific combination of these points is affected and monitored. For surprise, the following action units are considered (a sketch of this rule in code is given after Fig. 2):
 - opening of the mouth, indicated by pt 19 and pt 18;
 - widening of the eyes, shown by pt 13, pt 14, pt 9 and pt 10;
 - raised eyebrows, indicated by pt 2 and pt 5.

Fig. 2: Facial points used for the distance definitions [13]
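A minimal sketch of how such a rule could be evaluated over the extracted point displacements follows. The mapping of point numbers to eyelid, eyebrow and lip positions and all threshold values are invented for illustration; they are not values reported by the authors:

    def eyebrows_raised(d, thr=0.05):
        # pt 2 and pt 5 are taken here as the eyebrow points; image y grows
        # downwards, so a raised eyebrow shows a negative vertical displacement.
        return d[2][1] < -thr and d[5][1] < -thr

    def eyes_wide_open(d, thr=0.03):
        # pt 9 / pt 13 are assumed to be upper-eyelid points (moving up) and
        # pt 10 / pt 14 lower-eyelid points (moving down).
        return (d[9][1] < -thr and d[13][1] < -thr and
                d[10][1] > thr and d[14][1] > thr)

    def mouth_open(d, thr=0.08):
        # pt 18 (upper lip) and pt 19 (lower lip) move apart vertically.
        return d[19][1] - d[18][1] > thr

    def gesture_surprise(d):
        # Conjunction mirroring Gesture(surprise) :- eyebrows(raised) ^
        # eyes(wide-open) ^ mouth(open); d holds one normalised (dx, dy)
        # displacement per fiducial point, indexed by point number.
        return eyebrows_raised(d) and eyes_wide_open(d) and mouth_open(d)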
Both face and feature localization are challenging because of the face/feature manifold, owing to high inter-personal variability (e.g. gender and race), intra-personal changes (e.g. pose, expression, presence or absence of glasses, beard or moustache), and the acquisition conditions (e.g. illumination and image resolution).

Artificial Neural Networks and Fuzzy Sets
We observed that other applications used an ANN [13] with one hidden layer [1] to implement recognition of gestures for a single facial feature (the lips) alone. Hence one such ANN will be necessary for each independent feature or Action Unit that is to be interpreted for emotion analysis. We intend to utilize neural networks to assist the system in learning, in a semi-supervised manner, how to customize its recognition of expressions for different specialized cases or environments.
Fuzzy logic [22] (and fuzzy sets) is a form of multi-valued logic derived from fuzzy set theory to deal with reasoning that is approximate rather than exact. Pertaining to facial expressions, fuzzy logic comes into the picture when a certain degree of deviation from the expected set of values for a particular facial expression is to be made permissible.
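As an illustration of this use of fuzzy sets, the sketch below replaces a hard threshold with a triangular membership function, so that a displacement close to, but not exactly at, the expected value still earns a partial degree of membership. The band limits are assumed values, not parameters from the paper:

    def fuzzy_membership(x, low, peak, high):
        # Triangular membership: the degree to which a measured displacement x
        # counts as, say, "eyebrow raised", instead of a hard yes/no threshold.
        if x <= low or x >= high:
            return 0.0
        if x <= peak:
            return (x - low) / (peak - low)
        return (high - x) / (high - peak)

    # Degree to which a normalised upward displacement of 0.04 counts as "raised",
    # assuming an expected value of about 0.06 and a tolerance band of 0.02-0.10.
    raised_degree = fuzzy_membership(0.04, low=0.02, peak=0.06, high=0.10)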
Fig. 3: The Two-Layer Neuro-Fuzzy Architecture [13]

Such tolerance is necessary because each person's facial muscles contort in slightly different ways while showing similar emotions. The deviated features may thus have slightly different coordinates, yet the algorithm should still work irrespective of these minor differences. Fuzzy logic is used to increase the success rate of recognizing facial expressions across different people. A neuro-fuzzy architecture incorporates both techniques, merging known values and rules with variable conditions, as shown in Fig. 3. Chatterjee and Shi [18] have also utilized the neuro-fuzzy concept to classify expressions, as well as to assign weights to facial features depending on their importance to the emotion detection process.

POTENTIAL ISSUES

Geographical Versatility
One of the fundamental problems faced by any software that accepts images of the user as input is that it needs to account for the geographical variation bound to occur in them. Though people coming from one geographical area share certain physiological similarities, they tend to be distinct from those who belong to another region, and these physical differences create many inconsistencies in the input obtained.
To tackle this issue, the software must employ a technique that helps increase the scale of permissible deviations, so that divergence to a certain degree can be rounded off and the corresponding feature still recognized. Support Vector Machines [10] help the software do this. Fundamentally, a set of SVMs is used to specify and incorporate rules for each facial feature and its related possible gestures. The more rules there are for a particular feature, the greater the degree of accuracy, even for the smallest of micro-expressions and for higher-resolution images.
Originally, SVMs were designed for binary classification; there are currently two types of approaches [10] for multi-class SVMs. The first is to construct and combine several binary classifiers (the "one-per-class" method), while the other is to consider all the data directly in one optimization formulation. SVMs can also be used to form slightly customized rules for different geographical populations: once the nativity of the subject is identified, the appropriate multi-class SVM can be used for further analysis and processing. Thus, this method uses SVMs to take care of the differences and inconsistencies due to environmental variations.
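A sketch of the "one-per-class" strategy using scikit-learn is shown below; the displacement features and emotion labels are random placeholders, and the kernel choice is an assumption rather than the configuration used by the authors:

    import numpy as np
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import SVC

    # X: one row of fiducial-point displacements per training image (22 points
    # x 2 coordinates); y: the emotion label of each row. Placeholder data only.
    rng = np.random.default_rng(0)
    X = rng.random((40, 44))
    y = rng.choice(["joy", "sorrow", "neutral", "surprise"], size=40)

    # "One-per-class": one binary SVM per emotion; the highest-scoring class wins.
    clf = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")).fit(X, y)
    print(clf.predict(X[:3]))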
Personalization and Calibration
The process of emotion detection can be further enhanced by training the system using semi-supervised learning. Since past data about a particular subject's emotive expressions can be stored and made available to the system later, it may serve to improve the machine's 'understanding' and be used as a base for inferring more about a person's emotional state from a more experienced perspective. One approach to supervised learning, on inception of the system, is AU and gesture mapping: the user gives feedback to the system, in the form of a gesture name, for his/her images flashed on the screen. This calibration process may be allowed approximately a minute or two to complete. In this way the user is effectively tutoring the system to recognize basic gestures, so that it can build on this knowledge in a semi-supervised manner later on. The feedback is stored in a database along with related entries for the parameters extracted from the region of interest (ROI) of the image displayed. The same feedback is also fed into the neuro-fuzzy network structure, which performs the translation of the input parameters to obtain the eventual result; it is used to shift the baseline for judging the input parameters and classifying them as part of the processing involved in the system. For further input received, the neural network then behaves in tandem with its new, personalized rule set.
The user can also request training examples to be gathered at discrete time intervals and provide a label for each. These can be combined with the displacements output by the feature extraction phase and added as new examples to the training set. Another way to increase the personalized information is to maintain an active history of the user, similar to a session. This data is used further on to implement a context-based understanding of the user and to predict the likelihoods of emotions that may be felt in the near future.
Calibration and personalization would further increase the recognition performance and allow a tradeoff between generality and precision to be made. By applying these methods, the system in focus will be able to recognize the user's facial gestures with very high accuracy; this approach is highly suitable for developing a personalized, user-specific application.

Dealing with Missing Values
The uncontrolled conditions of the real world, and the numerous methods that can be employed to obtain the information for processing, can cause extensive occlusions and create a vast scope for missing values in the input. In the history of linear PCA, the problem of missing values has long been considered an essential issue and has been investigated in depth [24][25]. In this paper the problem of missing values is therefore tackled by taking the facial symmetry of the subject into consideration. Though human faces are not exactly symmetrical for most expressions, we create a perfectly symmetrical face using the left or the right side of the face (depending on which one is clearer); we call this the reference image. The creation of the reference image from the right and left sides of the face respectively is shown in Fig. 4.

Fig. 4: Left: original face; Centre: symmetrical face created using the right side; Right: symmetrical face created using the left side

The original image is first scanned for the occluded or blank areas of the face, which constitute the set of missing values. After all the locations of blockages have been identified, the reference image is super-imposed on this scanned image. During the superimposition the facial symmetry is thus utilized to complete the input values wherever necessary, so the missing values are taken care of.
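The construction of the reference image and the superimposition step can be sketched as follows; the code assumes a grayscale face ROI that has already been cropped and centred on the vertical midline, which is a simplification of the real pipeline:

    import numpy as np

    def reference_from_half(face, use_left=True):
        # Build a perfectly symmetrical "reference" face by mirroring the clearer
        # half of a cropped, vertically centred grayscale face ROI.
        h, w = face.shape
        mid = w // 2
        ref = face.copy()
        if use_left:
            ref[:, w - mid:] = face[:, :mid][:, ::-1]   # mirror left half onto right
        else:
            ref[:, :mid] = face[:, w - mid:][:, ::-1]   # mirror right half onto left
        return ref

    def fill_missing(face, reference, missing_mask):
        # Super-impose the reference image onto the occluded pixels only.
        filled = face.copy()
        filled[missing_mask] = reference[missing_mask]
        return filled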
Overcoming Time Constraints
The main challenge in analyzing the face for the expression being portrayed is the amount of processing time required; this becomes a bottleneck in real-time scenarios. One way of overcoming this time constraint is to divide the image into various regions of interest and analyze each independently, in parallel. For instance, the face can be divided into regions of the mouth, chin, eyes, eyebrows, etc., and the parallel analyses of these regions can be combined to obtain the final result. This approach guarantees a gain in the throughput of the system; a sketch of the idea follows Fig. 5.

Fig. 5: Delegation of Image Components to Parallel Processors
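A sketch of this parallel decomposition using Python's standard process pool is given below; the per-region analysis function is a placeholder, since the actual feature detectors are described elsewhere in the paper:

    from concurrent.futures import ProcessPoolExecutor

    def analyse_roi(named_roi):
        # Placeholder per-region analysis; a real system would run the feature
        # detector for this region and return its detected action units.
        name, roi = named_roi
        return name, {"mean_intensity": float(roi.mean())}

    def analyse_face_in_parallel(rois):
        # rois: dict mapping a region name ("mouth", "chin", "eyes", ...) to a
        # cropped numpy array. Each region is analysed in its own process and
        # the partial results are combined into a single dictionary.
        # (Call this from under an `if __name__ == "__main__":` guard on
        # platforms that spawn worker processes.)
        with ProcessPoolExecutor() as pool:
            return dict(pool.map(analyse_roi, rois.items()))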
RESULTS
In this section we present the resulting effectiveness of our system's facial expression recognition capability, as well as the subsequent emotion classification. The confusion matrix below shows, for a dataset of images depicting a particular emotion, how many images were identified correctly and how many were confused with other emotions. The dataset comprised images taken of an expert user who knows what a typical emotion is expected to show, as per the universal norms described by Ekman [12]. Our dataset comprised 60 images of Indian faces, with 15 each depicting the Joy, Sorrow, Surprise and Neutral emotive states.

Table 1: Person-dependent confusion matrix for training and test data supplied by an expert user (total accuracy: 71.7%)

Expression   Joy   Sorrow   Neutral   Surprise   Overall
Joy          11    1        1         2          73.3%
Sorrow        2    10       2         1          66.7%
Neutral       3    2        10        0          66.7%
Surprise      2    0        1         12         80.0%

As is evident from Table 1, our system shows greater accuracy for Joy and Surprise, with 73.3% and 80.0% accuracy rates respectively. The average accuracy of our system is 71.7%. These results can be further improved by personalizing the system as discussed in the previous section. A personalized system is tuned to its user's specific facial actions and hence, after a calibration period, can be expected to give an average accuracy of up to 90%.
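The per-class and overall figures quoted above follow directly from the counts in Table 1, as the short check below illustrates:

    import numpy as np

    # Rows and columns ordered Joy, Sorrow, Neutral, Surprise (counts from Table 1).
    confusion = np.array([[11, 1, 1, 2],
                          [2, 10, 2, 1],
                          [3, 2, 10, 0],
                          [2, 0, 1, 12]])

    per_class = confusion.diagonal() / confusion.sum(axis=1)  # 73.3%, 66.7%, 66.7%, 80.0%
    overall = confusion.diagonal().sum() / confusion.sum()    # 43 / 60 = 71.7%
    print(per_class, overall)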
FUTURE SCOPE AND APPLICATIONS
As human facial expression recognition is a very elementary process, it is useful for evaluating the mood or emotional state of a subject under observation. As such, tremendous potential lies untapped in this domain. The basic idea of a machine being able to comprehend the human emotive state can be put to use in innumerable scenarios, a few of which we mention here.
 - The ability to detect and track a user's state of mind has the potential to allow a computing system to offer relevant information when the user needs help, not just when the user requests it; for instance, changing the room ambience by judging the mood of the person entering it.
 - Helping people in emotion-related research to improve the processing of emotion data.
 - Applications in surveillance and security. For instance, computer models have obtained up to 71% correct classification of innocent or guilty participants based on the macro-features extracted from video camera footage.
 - In this regard, lie detection amongst criminal suspects during interrogation is also a useful aspect for which this system can form a base. It is proven that facial cues can, more often than not, give away a lie to the trained eye.
 - Patient monitoring in hospitals, to judge the effectiveness of prescribed drugs, is one application in the health sector. In addition, diagnosis of diseases that alter facial features and psychoanalysis of a patient's mental state are further possibilities.
 - Clever marketing is feasible using emotional knowledge of a patron, tailored to what the patron might need based on his/her state of mind at any instant.
 - Detecting symptoms such as drowsiness, fatigue, or even inebriation can also be done using this system. Thus, by helping in the process of driver monitoring, the system can play an integral role in reducing road mishaps to a great extent.
Also noteworthy is the advantage of reading emotions through facial cues, as our system does, over reading them from audio recordings of human voices. Not only is the probability of noise affecting and distorting the input reduced for our system, but there are also fewer ways to disguise or misread visual information as opposed to audio data, as is also stated by Busso et al. [23]. Hence facial expressions would form a vital part of a multimodal system for emotion analysis as well.

CONCLUSION
As is quite evident after plenty of research and deliberation, gaining insight into what a person may be feeling is very valuable for many reasons. The future scope of this field is visualized to be practically limitless, with more futuristic applications visible on the horizon, especially in the field of security. Our facial expression recognition system, utilizing a neuro-fuzzy architecture, is 71.7% accurate, which is approximately the level of accuracy expected from a support vector machine approach as well. Every system has its limitations; although this particular implementation of facial expression recognition may perform less accurately than end users expect, it is envisioned to contribute significantly to the field, upon which similar work can be furthered and enhanced. Our aim in forming such a system is to define a standard protocol that may be used as a component in the many applications that will no doubt require an emotion-based HCI.
Empowering computers in this way has the potential of changing the very way a machine "thinks". It gives machines the ability to understand humans as 'feelers' rather than 'thinkers'. With this in mind, the system can even be implemented in the context of Artificial Intelligence. As part of the relentless efforts of many to create intelligent machines, facial expressions and emotions have played, and always will play, a vital role.

REFERENCES
[1] Piotr Dalka, Andrzej Czyzewski, "Human-Computer Interface Based on Visual Lip Movement and Gesture Recognition", International Journal of Computer Science and Applications, Technomathematics Research Foundation, Vol. 7, No. 3, pp. 124-139, 2010.
[2] M. Pantic, L.J.M. Rothkrantz, "Facial Gesture Recognition in face image sequences: A study on facial gestures typical for speech articulation", Delft University of Technology.
[3] Claude C. Chibelushi, Fabrice Bourel, "Facial Expression Recognition: A Brief Overview", pp. 1-5, 2002.
[4] Aysegul Gunduz, Hamid Krim, "Facial Feature Extraction Using Topological Methods", IEEE, 2003.
[5] Kyungnam Kim, "Face Recognition Using Principal Component Analysis".
[6] Abdallah S. Abdallah, A. Lynn Abbott, Mohamad Abou El-Nasr, "A New Face Detection Technique using 2D DCT and Self Organizing Feature Map", World Academy of Science, Engineering and Technology 27, 2007.
[7] Sander Borsboom, Sanne Korzec, Nicholas Piel, "Improvements on the Human Computer Interaction Software: Emotion", Methodology, 2007, pp. 1-8.
[8] A. Mehrabian, "Communication without Words", Psychology Today 2 (4), 1968, pp. 53-56.
[9] M. Pantic, L.J.M. Rothkrantz, "Expert System for Automatic Analysis of Facial Expressions", Image and Vision Computing 18, 2000, pp. 881-905.
[10] Li Chaoyang, Liu Fang, Xie Yinxiang, "Face Recognition using Self-Organizing Feature Maps and Support Vector Machines", Proceedings of the Fifth ICCIMA, 2003.
[11] Aitor Azcarate, Felix Hageloh, Koen van de Sande, Roberto Valenti, "Automatic Facial Emotion Recognition", University of Amsterdam, pp. 1-10, June 2005.
[12] P. Ekman, "Strong evidence for universals in facial expressions", Psychological Bulletin, 115(2), 1994, pp. 268-287.
[13] A. Raouzaiou et al., "A Hybrid Intelligence System for Facial Expression Recognition", EUNITE 2002, pp. 482-490.
[14] Y. Liu et al., "Facial Asymmetry Quantification for Expression Invariant Human Identification".
[15] Seyed Mehdi Lajevardi, Zahir M. Hussain, "A Novel Gabor Filter Selection Based on Spectral Difference and Minimum Error Rate for Facial Expression Recognition", 2010 Digital Image Computing: Techniques and Applications, pp. 137-140.
[16] Ketki K. Patil et al., "Facial Expression Recognition and Head Tracking in Video Using Gabor Filtering", Third International Conference on Emerging Trends in Engineering and Technology, pp. 152-157.
[17] Patrick Lucey, Simon Lucey, Jeffrey Cohn, "Registration Invariant Representations for Expression Detection", 2010 Digital Image Computing: Techniques and Applications, pp. 255-261.
[18] Suvam Chatterjee, Hao Shi, "A Novel Neuro-Fuzzy Approach to Human Emotion Determination", 2010 Digital Image Computing: Techniques and Applications, pp. 282-287.
[19] P. Li et al., "Automatic Recognition of Smiling and Neutral Facial Expressions", 2010 Digital Image Computing: Techniques and Applications, pp. 581-586.
[20] Irfan A. Essa, "Coding, Analysis, Interpretation and Recognition of Facial Expressions", IEEE Transactions on Pattern Analysis and Machine Intelligence, July 1997, pp. 757-763.
[21] Roberto Valenti, Nicu Sebe, Theo Gevers, "Facial Expression Recognition: A Fully Integrated Approach", ICIAPW '07: Proceedings of the 14th International Conference on Image Analysis and Processing Workshops, 2007, pp. 125-130.
[22] L. A. Zadeh, "Fuzzy Sets", Information and Control 8, 1965, pp. 338-353.
[23] Carlos Busso et al., "Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information", ACM, 2004.
[24] N. F. Troje et al., "How is bilateral symmetry of human faces used for recognition of novel views?", Vision Research, 1998, pp. 79-89.
[25] R. Thornhill et al., "Facial Attractiveness", Trends in Cognitive Sciences, December 1999.