Effects of Illumination Changes on the Performance of Geometrix FaceVision 3D FRS

Eric P. Kukula (kukula@purdue.edu), Industrial Technology, Purdue University, West Lafayette, IN 47906, USA
Stephen J. Elliott, PhD (elliott@purdue.edu), Industrial Technology, Purdue University, West Lafayette, IN 47906, USA
Roman Waupotitsch (romanw@geometrix.com), Vice President of R&D, Geometrix Inc., 1590 The Alameda Ste 200, San Jose, CA 95124, USA
Bastien Pesenti (bastienp@geometrix.com), Research Engineer, Geometrix Inc., 1590 The Alameda Ste 200, San Jose, CA 95124, USA



ABSTRACT

     This evaluation examined the effects of four frontal light intensities on the performance of a 3D face recognition algorithm, specifically testing the significance between an unchanging enrollment illumination condition (220-225 lux) and four different illumination levels for verification. The evaluation also analyzed the significance of external artifacts (i.e. glasses) and personal characteristics (i.e. facial hair) on the performance of the face recognition system (FRS).
     Collected variables from the volunteer crew included age, gender, ethnicity, facial characteristics, hair covering the forehead, scars on the face, and glasses.
     The analysis of the data revealed that there are no statistically significant differences between environmental lighting and 3D FRS performance when a uniform or constant enrollment illumination level is used.

Keywords: biometrics, 3D face recognition, environmental conditions, performance testing

MOTIVATION

     As government and private corporations begin to implement biometric technologies in operational settings, such as airports and facility access control, the environment and application must be fully examined before implementation. With regard to face recognition, there are several challenges, including illumination, which may affect the performance of the system. The implementation of biometric systems, including face recognition systems, into legacy environments that may not have ideal environmental conditions indicates that this is an important area of research as deployments of face recognition systems become pervasive. As a result, environmental conditions such as lighting may be inconsistent, consequently affecting the performance of the face recognition system. In previous research by Kukula and Elliott [1], a commercial off-the-shelf (COTS) 2D facial recognition algorithm was assessed, which revealed that 2D face recognition still has significant challenges to overcome with regard to illumination, specifically when the ambient lighting is low, as well as when light is not held constant.
     Recently, three-dimensional face recognition algorithms have started to emerge in the marketplace. According to the manufacturers, 3D face recognition has advantages over 2D face recognition since it compares the 3D shape of the face, which is invariant to different lighting conditions and pose, although only light conditions were evaluated in this study.
     Over the past ten years, three large-scale independent evaluations have been conducted on 2D COTS facial recognition systems, which have shown that performance dramatically decreases when environment lighting changes [2-4]. Currently, independent testing of 3D systems is sparse as it is an emerging biometric technology. However, internal testing conducted by Geometrix has reported equal error rates (EER) of less than 2% using image databases from the University of Southern California and the University of Notre Dame. At the time of writing, no independent testing of COTS 3D face recognition has been completed. However, the NIST Face Recognition Grand Challenge (FRGC) is currently underway, with a report set to be released in August of 2005.
     Further internal studies of the Geometrix FaceVision system commissioned by the Defense Advanced Research Projects Agency (DARPA) concluded that as few as 6 gray values are sufficient for the FaceVision system to perform high-quality 3D reconstruction of faces. However, until now no independent performance assessments




0-7803-8506-3/04/$17.00 ©2004 IEEE




Authorized licensed use limited to: Purdue University. Downloaded on February 27,2010 at 12:03:45 EST from IEEE Xplore. Restrictions apply.
using different lighting conditions have been performed. The purpose of the evaluation reported here was to address exactly this aspect of 3D recognition, namely to perform a system-level test of the Geometrix FaceVision system.

CONCEPT OF THE SYSTEM

     The 3D face recognition system used in this evaluation was the Geometrix Human Identification System (HIS). The system's fundamental algorithms were inspired by Chen and Medioni [5]. The sensor used was the FaceVision 200, which captures two images using two stereo-calibrated cameras. The system then processes the images using proprietary and patented algorithms to construct a metrically accurate 3D model of the face.
     The 3D face model is then further processed to create a fully textured version of the face that may be used for visual inspection by an operator. Moreover, a 3D face template is extracted from the model [6,7], which is 3 kilobytes for one-to-one verification and less than 200 bytes for one-to-many identification. Verification time in this evaluation averaged 12 seconds on a single processor, while internal testing using dual processors averaged less than 6 seconds.
     The 3D face template encodes the salient features of the face with patented ActiveFusion algorithms, which allow a very accurate comparison between the "enrollment" face and the captured "verification" face. Robustness techniques are used to weigh different aspects of the face according to their contribution to "being able to distinguish two faces" and their robustness to changes in facial shape over time and changes due to facial expression, both of which were outside the scope of this study.

EXPERIMENTAL SETUP

     This evaluation took place in the Biometric Standards, Performance, and Assurance Laboratory in the School of Technology at Purdue University. The testing environment, shown in Figure 1, was similar to that of Blackburn, Bone, and Phillips [8] and the setup described by Kukula and Elliott [1,8].

Figure 1: Testing Environment (annotations: stool 2' off the ground; 3 light sources, 8.5' from the ground to the bottom of the enclosure)

     A light-impermeable curtain segregated the testing environment from the educational computer lab. All fluorescent lighting was removed from the testing environment, and the curtain impeded the uncontrolled fluorescent illumination from the educational lab area, resulting in a stable zero-illuminance (0 lux) environment. The background used was very close to the recommended 18% gray [9-10]. The external lighting used for verification consisted of three JTL Everlight continuous halogen lamps with 500 Watt USHIO halogen bulbs covered by 24 inch softboxes. The lamps were positioned in a manner that created an evenly illuminated face. The Geometrix FaceVision 200 camera system, shown in Figure 2, included a lighting system that remained constant throughout the evaluation (both enrollment and verification). The illumination of the experimental area was monitored with a NIST-certified broad-range lux/fc light meter. The FaceVision 200 camera system included two off-the-shelf USB cameras. The cameras were attached to a Dell Optiplex GX260 computer through an Orange Micro USB 2.0 PCI card. The computer had a single 2.0 GHz processor, 512 MB RAM, and a 40 GB hard drive. The operating system was Microsoft Windows XP Pro SP1.
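
The FaceVision pipeline itself is proprietary, but the depth-from-two-cameras principle the Concept of the System section relies on is standard stereo triangulation: a facial feature seen by both calibrated cameras shifts horizontally between the two images, and that shift (disparity) determines its depth. The sketch below illustrates the idea with hypothetical numbers; the focal length, baseline, and disparity values are illustrative, not FV200 parameters.

```python
# Textbook stereo triangulation for a calibrated two-camera rig:
# a point observed by both cameras, which are separated by a baseline B,
# appears shifted by a disparity d (in pixels) between the two images.
# Its depth is Z = f * B / d, where f is the focal length in pixels.
# All numbers below are illustrative, not FaceVision FV200 parameters.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth in meters of a point matched between the two camera images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# A feature shifted 40 px between images, with an 800 px focal length
# and a 6 cm baseline, lies 1.2 m from the rig:
z = depth_from_disparity(focal_px=800, baseline_m=0.06, disparity_px=40)
print(round(z, 3))  # 1.2
```

Repeating this for many matched points across the face yields the dense, metrically accurate surface from which the 3D template is extracted.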








Figure 2: Geometrix FaceVision 200 camera system

Lighting

     This evaluation tested the performance of a 3D face recognition algorithm using one enrollment lighting intensity and four verification lighting intensities. The enrollment lighting intensity used only the Geometrix system LED lights, which were fastened to each side of the camera mount, as can be seen in Figure 2. The illumination defined for enrollment was 220-225 lux. These LED lights remained on throughout testing. Verification occurred at 4 different light intensities, as described in Table 1.

Table 1: Definition of lighting conditions

Use                       Name                 Light Intensity
Enrollment/Verification   Light Condition 1    220-225 lux
Verification              Light Condition 2    320-325 lux
Verification              Light Condition 3    650-655 lux
Verification              Light Condition 4    1020-1140 lux

Hardware

     The COTS Geometrix FaceVision FV200 sensor was used for image acquisition (Figure 2). It is a passive stereo-based sensor incorporating board-level cameras and custom lenses, and is connected to a computer using a USB 2.0 interface. The dimensions of the sensor are approximately 6.5 x 4.3 x 2.5 inches. This sensor was used for both enrollment and verification. The FaceVision 200 sensor incorporates an LED-based lighting unit attached on each side of the system. The lights are dimmable; however, when set at the recommended intensity (220-225 lux), the LED light system provides sufficient illumination for the sensor to operate in an optimal manner, even in the darkest environment. For the purpose of this evaluation, the protocol called for the system lights to remain at the recommended level of 220-225 lux throughout the experiment. The COTS unit is currently optimized to capture faces between 18 inches and 30 inches for enrollment, and 16 inches to 36 inches for verification or identification.
     The sensor was originally calibrated by Geometrix. On-site color and sensitivity calibration was performed once in the Biometric Standards, Performance, and Assurance Laboratory to optimize the sensor for the environment. It was subsequently inspected each day in accordance with the testing protocol.

Software

     Geometrix provided all software used in this evaluation. The 3D model creator was FaceVision 200 Series v5.1. The evaluation also used the Geometrix FaceVision Human Identification System (FaceVision HIS) version 2.3. The system provides both an interface for enrollment and verification or identification operations, as well as administrative tools to manage the database of enrolled persons (Figure 3). However, for this evaluation only the enrollment and verification software was used.

Figure 3: FaceVision HIS (architecture diagram: FaceVision HIS GUI, FV200 sensor, SQL Server)

     The enrollment mode is designed to enroll new persons, add additional biometric templates for existing persons, and access or edit demographic information. The FaceVision HIS software provides a seamless interface for operating the FaceVision FV200 capture sensor. While the enrollment process is fully automatic, a manual step may be performed to verify the enrollment data. This step was








performed during each enrollment as part of the protocol, in order to verify model quality.
     The verification mode is designed to verify the claimed identity of the captured person against the 3D template stored in the database. After a few seconds, the system gives a binary answer, "Access Granted" or "Access Denied." The system also displays a confidence rating of the decision made, as well as a list of potential impostors known to the system. However, only the binary response was used for data analysis in this evaluation.

Captured Image Specifications

     To eliminate external effects on the experiment and to emphasize the sole effect of lighting on the performance of the system, the subject's position, facial pose, and face-covering artifacts were defined by the test protocol. Specifically, faces were captured with the nose approximately centered in the image. To simplify the process, each participant remained seated two feet from the ground during the evaluation. To compensate for the varying heights of participants, the camera was attached to a mechanical tripod that could be adjusted in height. The resulting captured image reflects the proposed face recognition data format specification for captured images [7], which can be seen in Figure 4.

Figure 4: INCITS face recognition data format image requirement (Griffin, 2003)

     This document suggests the image should be centered, meaning the mouth and middle of the nose should lie on the imaginary line AA (Figure 4). The location of the eyes in the image should range between 50-70% of the distance from the bottom of the image, and the width-to-head ratio (W/CC) should be no less than 7/4 (1.75). Images collected in this study fully conformed to the requirements proposed in [9], as seen in Figure 5. The width-to-head ratio of the image labeled light condition 2 was 1630/625, or 2.608.

Figure 5: Sample images from the 4 tested light intensities

EVALUATION CLASSIFICATION

     The evaluation was defined as cooperative, overt, unhabituated, attended, and closed [11]. The experimental evaluation is classified as a modified technology evaluation. A traditional technology evaluation is conducted in a laboratory, with a universal sensor, and uses the same data, allowing repeatability of samples. In this case, however, data was collected and evaluated on-line, with the specific results and scores presented after the completion of the computation, hence its classification as a modified technology evaluation.
     The purpose of the evaluation was to assess the effects of four frontal light intensities. Failure to Enroll, Failure to Acquire, and a statistical analysis of the differences in light and performance of the device were assessed.

Volunteer Crew

     This evaluation involved thirty subjects from the School of Technology at Purdue University. Demographic information can be seen in Table 2.
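
The image-geometry requirements quoted in the Captured Image Specifications section (eyes located 50-70% of the distance from the bottom of the image, width-to-head ratio of at least 7/4) can be expressed as a small conformance check. The sketch below is illustrative and not part of the Geometrix or INCITS software; the function name, parameter names, and sample pixel values are assumptions.

```python
# Conformance check for the image-geometry rules from the proposed face
# recognition data format [7]: eye position between 50% and 70% of the
# image height (measured from the bottom), and image-width to head-width
# ratio of at least 7/4 (1.75). Names and sample values are illustrative.

def image_conforms(image_w: float, image_h: float,
                   eye_y_from_bottom: float, head_w: float) -> bool:
    """True when an image meets both geometry requirements."""
    eye_frac = eye_y_from_bottom / image_h      # must fall in [0.50, 0.70]
    width_ratio = image_w / head_w              # must be >= 7/4
    return 0.50 <= eye_frac <= 0.70 and width_ratio >= 1.75

# A hypothetical capture with the eyes at 60% of the image height and a
# width-to-head ratio of 652/250 = 2.608 (the ratio reported for the
# light condition 2 sample) passes the check:
print(image_conforms(image_w=652, image_h=1200,
                     eye_y_from_bottom=720, head_w=250))  # True
```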








Table 2: Volunteer crew demographic information

Ethnicity:   Caucasian 24, African American 1, Asian 2, Hispanic 3, Native ...
Age:         30-39 ..., 40-49 3
...          yes ..., no 26
...          yes 7, no 23

TESTING PROTOCOL

     The protocol used for this evaluation called for calibration of the cameras each day testing occurred. At this time the operator also verified the experimental setup of all the equipment used for the study. The testing protocol consisted of one enrollment light condition and four verification light conditions. The lighting conditions are defined in Table 1. Before data collection began, participants were informed of the testing procedures and given specific instructions, which included:

- Remove eyeglasses, hats, or caps
- Refrain from chewing gum or candy
- Look directly at the sensor (between the two cameras) and maintain a neutral expression
- Stay as still as possible while the music is playing.

     At this time, the field of view of the camera was checked to ensure captured images resembled Figure 4. The distance between the camera and the test subject's face was also measured to ensure the proper camera depth of field was achieved. The distance used for both enrollment and verification in this evaluation was 28 inches. To monitor the lighting conditions, subjects were asked to hold a light meter sensor in front of their nose periodically throughout the evaluation. These readings were recorded and checked to maintain repeatability throughout the study.
     The generalized testing protocol model can be seen in Figure 6. This evaluation was designed to compare the stored 3D face template created in the enrollment lighting condition (220-225 lux) against verification attempts captured at the four different light intensities: 1) Enrollment lighting (220-225 lux), 2) Light condition 2 (320-325 lux), 3) Light condition 3 (650-655 lux), and 4) Light condition 4 (1020-1140 lux). The protocol called for 3 verification attempts in each of the four light intensities, for a total of 12 attempts for each subject.

Figure 6: Protocol Design

Enrollment

     The first testing procedure was enrollment. After the subject was seated and the camera position was verified, the test operator notified the subject that the image capture sequence was beginning. During this sequence music could be heard. After the capture sequence was complete, a 2D image appeared, which was checked for quality (no facial expressions, closed eyes, etc.). The three-dimensional model was then computed, checked for correct nose position and quality, then stored. An example of a 3D model used in this study is shown in Figure 7.
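
The verification schedule defined by the protocol above can be summarized in data form: three attempts at each of the four Table 1 intensities, for 12 attempts per subject. The sketch below is illustrative; the dictionary, condition labels, and helper function are not part of the test software, though the lux ranges are taken from Table 1.

```python
# The per-subject verification schedule described in the testing
# protocol: three attempts in each of the four Table 1 light conditions,
# i.e. 12 attempts per subject. Labels and helper are illustrative;
# lux ranges come from Table 1.

LIGHT_CONDITIONS = {              # name -> (min lux, max lux)
    "LC1 (enrollment)": (220, 225),
    "LC2": (320, 325),
    "LC3": (650, 655),
    "LC4": (1020, 1140),
}
ATTEMPTS_PER_CONDITION = 3

def schedule(subject_id: str):
    """All verification attempts for one subject, in protocol order
    (conditions were not randomized)."""
    return [(subject_id, cond, attempt)
            for cond in LIGHT_CONDITIONS
            for attempt in range(1, ATTEMPTS_PER_CONDITION + 1)]

s = schedule("S01")
print(len(s))  # 12
```

Over the 30-subject crew this yields the 360 genuine verification attempts reported in the Results section.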








Figure 7: Example of a 3D model

Verification

     Verification followed the same procedure for each subject. The light conditions followed a structured order and were not randomized. After enrollment was complete, 3 verification attempts were conducted in the same lighting intensity used for enrollment (light condition 1), followed by 3 attempts each in light conditions 2, 3, and 4. Figure 8 shows the visual display given to the operator after each verification attempt. To ensure data collection was accurate, a screen shot of each attempt was collected using a barcode naming convention, which removed data-keying errors as well as reduced the time between attempts.

Figure 8: Feedback from a verification attempt

RESULTS

     The study consisted of 30 individuals, 30 enrollment attempts, 30 impostor attempts, and 360 genuine verification attempts. At the enrollment stage there were no Failures to Enroll (FTE = 0%) or Failures to Acquire (FTA = 0%).
     The hypotheses were set up to establish whether there was any significant difference in the performance of the algorithm in the verification mode when presented with the various light levels. A statistical analysis shows that, at an alpha level of 0.01, there was no statistically significant difference in the performance of the algorithm when the light level was measured between light level 1 (220-225 lux) and the other levels (320-325 lux; 650-655 lux; 1020-1140 lux).

CONCLUSION

     Because the Geometrix face recognition engine uses a template extracted from 3D, unlike the 2D image-based engines, this study shows that this 3D algorithm seems to have overcome the usual limitations of illumination variations. Unlike a previous study [1], this evaluation has shown that there are no statistically significant differences in performance at any of the tested illumination levels. Further research is underway to evaluate lighting angles and pose, to establish the progress of 3D face recognition algorithms.

REFERENCES

[1]  Kukula, E., & Elliott, S. (2003). Securing a Restricted Site: Biometric Authentication at Entry Point. Paper presented at the 37th Annual 2003 International Carnahan Conference on Security Technology (ICCST), (pp. 435-439), Taipei, Taiwan, ROC.

[2]  Phillips, P., Rauss, P., & Der, S. (1996). FERET (Face Recognition Technology) Recognition Algorithm Development and Test Report (ARL-TR-995): U.S. Army Research Laboratory.

[3]  Blackburn, D., Bone, J., & Phillips, P. (2000). Face Recognition Vendor Test (FRVT) 2000 Evaluation Report. DoD, DARPA, NIJ.

[4]  Phillips, P., Grother, P., Bone, M., Micheals, R., Blackburn, D., & Tabassi, E. (2003). Face Recognition Vendor Test 2002: DARPA, NIST, DOD, NAVSEA.

[5]  Chen, G., & Medioni, G. (2001). Building Human Face Models from Two Images. Journal of VLSI Signal Processing, Kluwer Academic Publishers, vol. 27, no. 1/2, pp. 127-140, January 2001.








Waupotitsch R. & Medioni G. Robust
       Automated Face Modeling and Recognition
       Based on 3 0 Shape. Biometrics Symposium
       on Research, Crystal City, September 2003.

       Waupotitsch R. & Medioni G. Face Modeling
       and Recognition in 3 - 0 . AMFG 2003: 232-
       233.

        Bone, M., & Blackburn, D. Face Recognition
        at a Chokepoint: Scenario Evaluation Results.
        2002, DoD Counterdrug Technology
        Development Program Office: Dahlgren. p.
        58.

        Griffin, P. (2003). Face Recognition Format
        for Data Interchange (M1/04-0041): INCITS
        M1.

        Rubenfeld, M., & Wilson, C. (1999). Gray
        Calibration of Digital Cameras to Meet NIST
        Mugshot Best Practice. NIST IR-6322.

[11]    Mansfield, A.J., & Wayman, J.L. Best
        Practices in Testing and Reporting
        Performance of Biometric Devices. 2002,
        Biometric Working Group. p. 32.










(2004) Effects of Illumination Changes on the Performance of Geometrix FaceVision 3D FRS

  • 1. Effects of Illumination Changes on the Performance of Geometrix Facevision@3D FRS Eric P. Kukula Stephen 1. Elliott, PhD Roman Waupotitsch Bastien Pesenti kukula@ purdue.edu ellion@purdue.edu romanw@geometrix.com bastienp@geometrix.com Industrial Technology, Industrial Technology, Vice President of R&D Research Engineer Purdue University Purdue University Geometrix Inc Geometrix Inc West Lafayette, IN 47906, West Lafayene, IN 47906, 1590 The Alameda Ste 200 1590 The Alameda Ste 200 USA USA San Jose. CA 95 124 San Jose, CA 95 124 ABSTRACT environmental conditions, such as lighting, may be inconsistent, consequently affecting the This evaluation examined the effects of four performance of the face recognition system. In frontal light intensities on the performance of a 3D previous research by Kukula and Elliott [l], a face recognition algorithm, specifically testing the commercially off the shelf software (COTS) 2D significance between an unchanging enrollment facial recognition algorithm was assessed, which illumination condition (220-225 lux) and four revealed that 2D face recognition still has different illumination levels for verification. The significant challenges to overcome with regard to evaluation also analyzed the significance of external illumination, specifically when the ambient lighting artifacts (i.e. glasses) and personal characteristics is low, as well as when light was not held constant. (i.e. facial hair) on the performance of the face Recently, three dimensional face recognition recognition system (FRS). algorithms have started to emerge in the Collected variables from the volunteer crew marketplace. According to the manufacturers, 3D included age, gender, ethnicity, facial face recognition has advantages over 2D face since characteristics, hair covering the forehead, scars on it compares the 3D shape of the face, which is the face, and glasses. 
invariant in different lighting conditions and pose, The analysis of data revealed that there are no although light conditions were only evaluated in this statistically significant differences between evaluation. environmental lighting and 3D FRS performance Over the past ten years there have been three when a uniform or constant enrollment illumination large scale independent evaluations conducted on level is used. 2D COTS facial recognition systems which have shown that performance dramatically decreases Keywords: biometrics, 3D face recognition, when environment lighting changes [ 2 4 environmental conditions, performance testing Currently, independent testing of 3D systems is sparse as it is an emerging biometric technology. MOTIVATION However, intemal testing conducted by Geometrix have reported equal error rates (EER) of less than As govemment and private corporations begin 2% using image databases from University of to implement biometric technologies in operational Southern California and the University of Notre settings, such as in airports and facility access Dame. At the time of writing, no independent control, the environment and application must be testing of COTS 3D face recognition has been fully examined before implementation. With regard complete. However the NIST Face Recognition to face recognition, there are several challenges to Grand Challenge (FRGC) is currently underway face recognition systems, including illumination, with report set to be released in August of 2005. which may affect the performance of the system. 
Further internal studies of the Geometrix Face The implementation of biometric systems, including Vision system commissioned by the Defense face recognition systems into legacy environments Advanced Research Projects Agency (DARPA) that may not have ideal environmental conditions, concluded that as little as 6 gray values are 4 indicate that this is an area of research that is sufficient for the Facevision system to perform important as deployments of face recognition high-quality 3D reconstruction of faces. However, systems become pervasive. As a result until now no independent performance assessments 331 02004 IEEE 0-7803-8506-3/02/$17.00 Authorized licensed use limited to: Purdue University. Downloaded on February 27,2010 at 12:03:45 EST from IEEE Xplore. Restrictions apply.
  • 2. using different lighting conditions have been S l o i - Zd’aflIhegrauird performed. The purpose of the evaluation reported 3 Light sources . a s ’ /mm ground Io the Mitom 01 the enclosure here was designed to address exactly this aspect of 3D recognition, namely to perform a system-level test of the Geometnx Facevision system. CONCEPT OF THE SYSTEM The 3D face recognition system used in this evaluation was the Geometrix Human Identification System (HIS). The system’s fundamental algorithms were inspired by Chen and Medioni [ 5 ] .The sensor used was the Face Vision 200, which captures two images using two stereo calibrated cameras. The system then processes the images using proprietary and patented algorithms to construct a metrically accurate 3D model of the face. The 3D face model is then further processed to Figure 1: Testing Environment create a fully textured version of the face that may be used for visual inspection by an operator. Moreover, a 3D face template is extracted from the A light impermeable curtain segregated the testing model [6,7], which is 3 kilobytes for one-to-one environment from the educational computer lab. All verification and less than 200 bytes for one-to-many fluorescent lighting was removed from the testing identification. Verification time in this evaluation environment and the curtain impeded the averaged 12 seconds on a single processor, while uncontrolled fluorescent illumination from the intemal testing using dual processors averaged less educational lab area, resulting in a stable zero than 6 seconds. illuminance (lux) environment. The background The 3D face template encodes the salient used was very close to the recommended 18% gray features of the face with patented Active FusionTM [9-IO]. 
The extemal lighting used for verification algorithms, which allows a very accurate composed of three JTL Everlight continuous comparison between the “enrollment” face with the halogen lamps with 500 Watt USHIO halogen bulbs captured “verification” face. Robustness techniques covered by 24 inch softboxes. The lamps were are used to weigh different aspects of the face positioned in a manner that created an evenly according to their contribution to “being able to illuminated face. The Geometrix Facevision 200 distinguish two faces” and their robustness to camera system, shown in included a lighting system changes in the facial shape over time and changes that remained constant throughout the evaluation due to facial expression, which were both outside (both enrollment and verification). The illumination the scope of this study. of the experimental area was monitored with a NIST certified broad range ludfc light meter. The Face SETUP EXPERIMENTAL Vision 200 camera system included two off-the- shelf USB cameras. The cameras were attached to a This evaluation took place in the Biometric Dell Omniplex GX260 computer through an Orange Standards, Performance, and Assurance Laboratory Micro USB 2.0 PCI card. The computer was a in the School of Technology at Purdue University. single 2.0 GHz processor, 512 MB RAM, 40 GB The testing environment, shown in Figure 1, was hard drive. The operating system was Microsoft similar to that of Blackbum, Bone, and Phillips [8] Windows XP Pro SPl. and the setup described by Kukula and Elliott [1,8]. 332 Authorized licensed use limited to: Purdue University. Downloaded on February 27,2010 at 12:03:45 EST from IEEE Xplore. Restrictions apply.
  • 3. The COTS unit is currently optimized to capture faces between 18 inches and 30 inches for enrollment, and 16 inches to 36 inches for verification or identification. The sensor was originally calibrated by Geometrix. On-site color and sensitivity calibration was performed once in the Biometrics Standards, Performance, and Assurance Laboratory to optimize the sensor in the environment. It was subsequently Figure 2: Geometrix Facevision 200 camera system inspected each day in accordance with the testing Lighting protocol. This evaluation tested the performance of a 3D Software face recognition algorithm using one enrollment lighting intensity and four verification lighting Geometrix provided all software that was used intensities. The enrollment lighting intensity used in this evaluation. The 3D model creator was only the Geometrix system LED lights, which were Facevision 200 Series v5.1. The evaluation also fastened to each side of the camera mount, which used the Geometrix Facevision Human can be seen in Figure 2. The illumination defined for Identification System (Facevision HIS) version enrollment was 220 - 225 lux. These LED lights 2.3. The system provides both an interface for remained on throughout testing. Verification enrollment and verification or identification occurred at 4 different light intensities as described operations, as well as administrative tools to in Table 1. manage the database of enrolled persons Table 1: Definition of lighting conditions (Figure 3). However for this evaluation only the Use I Name I Light Intensity enrollment and verification software was used. I Light Condition 1 I 220-225 lux U Enrollment/ Verification FACEVISION Verification Light Condition 2 320-325 lux HIS GUI Verification Light Condition 3 650-655 lux Verification Light Condition 4 1020-1140 lux Hardware The COTS Geometrix Facevision FV200 sensor was used (Figure 2) for image acquisition. 
It is a passive stereo-based sensor incorporating hoard- level cameras and custom lenses, which is connected to a computer using a USB 2.0 interface. The dimensions of the sensor are approximately FACEVISION 6.5x4.3x2.5 inches. This sensor was used for both FVZOO SENSOR saL SERVER enrollment and verification. The Facevision 200 Copyright 0 GEOMETRIX sensor incorporates an LED based lighting unit that is attached on each side of the system. The lights are Figure 3: Facevision HIS dimmable. However, when set at the recommended The enrollment mode is designed to enroll new intensity (220-225 lux), the LED light system persons, add additional biometric templates for provides sufficient illumination for the sensor to existing persons, and access or edit demographic operate in an optimal manner, even in the darkest information. The Facevision HIS software provides environment. For the purpose of this evaluation, the a seamless interface for operating the Facevision protocol called for the system lights to remain at the FV200 capture sensor. While the enrollment process recommended level of 220-225 lux throughout the is fully automatic, a manual step may be performed experiment. to verify the enrollment data. This step was 333 Authorized licensed use limited to: Purdue University. Downloaded on February 27,2010 at 12:03:45 EST from IEEE Xplore. Restrictions apply.
The verification mode is designed to verify the claimed identity of the captured person against the 3D template stored in the database. After a few seconds, the system gives a binary answer, "Access Granted" or "Access Denied." The system also displays a confidence rating for the decision made, as well as a list of potential impostors known to the system. However, only the binary response was used for data analysis in this evaluation.

Captured Image Specifications

To eliminate external effects on the experiment and to emphasize the sole effect of lighting on the performance of the system, the subject's position, facial pose, and face-covering artifacts were defined by the test protocol. Specifically, faces were captured with the nose approximately centered in the image. To simplify the process, each participant remained seated during the evaluation, two feet from the ground. To compensate for the varying heights of participants, the camera was attached to a mechanical tripod that could be adjusted in height. The resulting captured image reflects the proposed face recognition data format specification for captured images [7], which can be seen in Figure 4.

Figure 4: INCITS face recognition data format image requirement (Griffin, 2003)

This document suggests the image should be centered, meaning the mouth and the middle of the nose should lie on the imaginary line AA (Figure 4). The location of the eyes in the image should lie between 50% and 70% of the distance from the bottom of the image, and the width-of-head ratio (A/CC) should be no less than 7/4 (1.75). Images collected in this study fully conformed to the requirements proposed in [9], as seen in Figure 5. The width-to-head ratio of the image labeled light condition 2 was 1.63/0.625, or 2.608.

Figure 5: Sample images from the 4 tested light intensities

EVALUATION CLASSIFICATION

The evaluation was defined as cooperative, overt, unhabituated, attended, and closed [11]. The experimental evaluation is classified as a modified technology evaluation. A traditional technology evaluation is conducted in a laboratory, with a universal sensor, and using the same data, ensuring repeatability of samples. In this case, however, data was collected and evaluated on-line, with the specific results and scores presented after the completion of the computation; hence its classification as a modified technology evaluation. The purpose of the evaluation was to assess the effects of four frontal light intensities. Failure to Enroll, Failure to Acquire, and a statistical analysis of the differences in light and the performance of the device were assessed.

Volunteer Crew

This evaluation involved thirty subjects from the School of Technology at Purdue University. Demographic information can be seen in Table 2.
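The image-geometry requirements described under Captured Image Specifications (eye location between 50% and 70% of the image height from the bottom, and a width-to-head ratio of at least 7/4) can be expressed as two simple checks. This is an illustrative sketch only; the function names and argument conventions are ours, not from the cited specifications.

```python
# Illustrative checks for the two geometric requirements cited above.
# Function names and argument conventions are ours.

def eyes_in_range(eye_y_from_bottom, image_height):
    """Eye line should sit 50-70% of the way up from the bottom edge."""
    frac = eye_y_from_bottom / image_height
    return 0.50 <= frac <= 0.70

def width_to_head_ratio(image_width, head_width):
    """Ratio that should be no less than 7/4 (1.75)."""
    return image_width / head_width

# The sample image for light condition 2: 1.63 / 0.625 = 2.608, above 1.75.
ratio = width_to_head_ratio(1.63, 0.625)
print(round(ratio, 3), ratio >= 1.75)  # 2.608 True
```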
Table 2: Volunteer crew demographic information
Ethnicity: Caucasian 24, African American 1, Asian 2, Hispanic 3

To monitor the light, subjects were asked to hold a light meter sensor in front of their nose periodically throughout the evaluation to monitor the lighting conditions. These readings were recorded and checked to maintain repeatability throughout the study.

The generalized testing protocol model can be seen in Figure 6. This evaluation was designed to compare the stored 3D face template created in the enrollment lighting condition (220-225 lux) against verification attempts captured at the four different light intensities: 1) enrollment lighting (220-225 lux), 2) light condition 2 (320-325 lux), 3) light condition 3 (650-655 lux), and 4) light condition 4 (1020-1140 lux).

Figure 6: Protocol Design

TESTING PROTOCOL

The protocol used for this evaluation called for calibration of the cameras each day testing occurred. At this time the operator also verified the experimental setup of all the equipment used for the study. The testing protocol consisted of one enrollment light condition and four verification light conditions; the lighting conditions are defined in Table 1. Before data collection began, participants were informed of the testing procedures and given specific instructions, which included:

- Remove eyeglasses, hats, or caps
- Refrain from chewing gum or candy
- Look directly at the sensor (between the two cameras) and maintain a neutral expression
- Stay as still as possible while the music is playing

At this time, the field of view of the camera was checked to ensure captured images resembled Figure 4. The distance between the camera and the test subject's face was also measured to ensure the proper camera depth of field was achieved. The distance used for both enrollment and verification in this evaluation was 28 inches. The protocol called for 3 verification attempts in each of the four light intensities, for a total of 12 attempts for each subject.

Enrollment

The first testing procedure was enrollment. After the subject was seated and the camera position was verified, the test operator notified the subject that the image capture sequence was beginning; during this sequence music could be heard. After the capture sequence was complete, a 2D image appeared, which was checked for quality (no facial expressions, closed eyes, etc.). The three-dimensional model was then computed, checked for correct nose position and quality, and then stored. An example of a 3D model used in this study is shown in Figure 7.
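The structure of the testing protocol (three verification attempts in each of the four light conditions, in a fixed order, for each of the thirty subjects) can be sketched as a small schedule generator. The identifiers below are ours, for illustration only.

```python
# Sketch of the attempt schedule implied by the protocol: for each subject,
# three verification attempts in each of the four light conditions, taken
# in the fixed (non-randomized) order 1, 2, 3, 4. Identifiers are ours.

SUBJECTS = 30
CONDITIONS = [1, 2, 3, 4]
ATTEMPTS_PER_CONDITION = 3

def schedule(subject_id):
    """(subject, condition, attempt) tuples for one subject, in test order."""
    return [(subject_id, c, a)
            for c in CONDITIONS
            for a in range(1, ATTEMPTS_PER_CONDITION + 1)]

per_subject = len(schedule(1))
print(per_subject, per_subject * SUBJECTS)  # 12 360
```

This reproduces the counts reported in the RESULTS section: 12 genuine verification attempts per subject and 360 genuine attempts overall.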
Figure 7: Example of a 3D model

Verification

Verification followed the same procedure for each subject. The light conditions followed a structured order and were not randomized. After enrollment was complete, 3 verification attempts were conducted in the same lighting intensity used for enrollment (light condition 1), followed by 3 attempts in each of light conditions 2, 3, and 4. Figure 8 shows the visual display given to the operator after each verification attempt. To ensure data collection was accurate, a screen shot of each attempt was collected using a barcode naming convention, which removed the data collection errors of keying data as well as reducing the time between attempts.

Figure 8: Feedback from a verification attempt

RESULTS

The study consisted of 30 individuals, 30 enrollment attempts, 30 impostor attempts, and 360 genuine verification attempts. At the enrollment stage there were no Failures to Enroll (FTE = 0%) or Failures to Acquire (FTA = 0%).

The hypotheses were set up to establish whether there was any significant difference in the performance of the algorithm in the verification mode when presented with the various light levels. A statistical analysis shows that, at an alpha level of 0.01, there was no statistically significant difference in the performance of the algorithm between light level 1 (220-225 lux) and the other levels (320-325 lux; 650-655 lux; 1020-1140 lux).

CONCLUSION

Because the Geometrix face recognition engine uses a template extracted from 3D, unlike the 2D image-based engines, this study shows that this 3D algorithm seems to have overcome the usual limitations of illumination variations. Unlike a previous study [1], this evaluation has shown that there are no statistically significant differences in performance at any of the tested illumination levels. Further research is underway to evaluate lighting angles and pose, to establish the progress of 3D face recognition algorithms.

REFERENCES

[1] Kukula, E., & Elliott, S. (2003). Securing a Restricted Site: Biometric Authentication at Entry Point. Paper presented at the 37th Annual 2003 International Carnahan Conference on Security Technology (ICCST), pp. 435-439, Taipei, Taiwan, ROC.
[2] Phillips, P., Rauss, P., & Der, S. (1996). FERET (Face Recognition Technology) Recognition Algorithm Development and Test Report (ARL-TR-995). U.S. Army Research Laboratory.
[3] Blackburn, D., Bone, J., & Phillips, P. (2000). Face Recognition Vendor Test (FRVT) 2000 Evaluation Report. DoD, DARPA, NIJ.
[4] Phillips, P., Grother, P., Bone, M., Micheals, R., Blackburn, D., & Tabassi, E. (2003). Face Recognition Vendor Test 2002. DARPA, NIST, DoD, NAVSEA.
[5] Chen, G., & Medioni, G. (2001). Building Human Face Models from Two Images. Journal of VLSI Signal Processing, Kluwer Academic Publishers, vol. 27, no. 1/2, pp. 127-140, January 2001.
[6] Waupotitsch, R., & Medioni, G. (2003). Robust Automated Face Modeling and Recognition Based on 3D Shape. Biometrics Symposium on Research, Crystal City, September 2003.
[7] Waupotitsch, R., & Medioni, G. (2003). Face Modeling and Recognition in 3-D. AMFG 2003, pp. 232-233.
[8] Bone, M., & Blackburn, D. (2002). Face Recognition at a Chokepoint: Scenario Evaluation Results. DoD Counterdrug Technology Development Program Office, Dahlgren, p. 58.
[9] Griffin, P. (2003). Face Recognition Format for Data Interchange (M1/04-0041). INCITS M1.
[10] Rubenfeld, M., & Wilson, C. (1999). Gray Calibration of Digital Cameras To Meet NIST Mugshot Best Practice. NIST IR-6322.
[11] Mansfield, A. J., & Wayman, J. L. (2002). Best Practices in Testing and Reporting Performance of Biometric Devices. Biometric Working Group, p. 32.
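As a closing illustration, the kind of comparison reported in the RESULTS section (no statistically significant difference at an alpha level of 0.01 between light level 1 and the other levels) could take the shape of a two-proportion test on verification outcomes. The paper does not state which statistical test was actually used, and the success counts below are invented for the sketch.

```python
# Illustrative only: a two-sided, two-proportion z-test. The paper does not
# specify its statistical test, and these success counts are made up.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided p-value for H0: the two success rates are equal."""
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0  # degenerate case: pooled rate of exactly 0 or 1
    z = (success_a / n_a - success_b / n_b) / se
    # Normal CDF via the error function: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 90 attempts per condition (30 subjects x 3 attempts), invented counts:
p = two_proportion_z(88, 90, 86, 90)
print(p > 0.01)  # True: no significant difference at alpha = 0.01
```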