Attendance Using Facial Recognition
1.1 Abstract
Face recognition is a biometric identification process whose demand and performance have grown rapidly year after year, and such systems are used mainly for security and commercial purposes. This work describes an automatic face recognition system that marks the attendance of staff in a real-time environment at a school. The task is difficult because real-time background subtraction in an image remains a challenge [1]. Faces are detected in real time, and a simple, fast Principal Component Analysis recognises the detected faces with a high accuracy rate. The matched face is used to mark the attendance of the employee. Manual entry of attendance in logbooks is a tedious task and also wastes time, so we develop a module that uses face recognition to manage the attendance records of staff [2]. Enrolment is a one-time process, and the enrolled face is stored in the database; each employee has a unique roll number used as the employee ID. The attendance of every employee is updated in the database. The results showed improved performance over a manual attendance management system. Attendance is marked after employee identification. The product also provides further solutions, with accurate results, for user-interface guidance and leave management systems.
The system first investigates and recognises people in video. The supported video formats for this project are MP4, AVI and WMV. It first extracts frames from the video, then extracts all faces from each frame and stores them in a directory folder called the database. Once an input video is chosen, the system extracts local binary pattern (LBP) features from every face in the directory and counts the number of faces in the video using a binary tree classifier. Finally, it displays the number of persons present in the scene. It then matches the faces against those in the test folder and identifies each person's name and address. The database contains information for 75 students; it is created manually and can grow or shrink depending on the students admitted to the institute.
KEYWORDS
Attendance using facial recognition, GUI program, facial recognition, classification algorithm, binary tree classifier, LBP feature extraction.
1.2 INTRODUCTION
Attendance plays an important role in any organisation, such as a school, college, industry or business, and recording it manually is a time-consuming process that also requires manpower [3] and financial resources. Consider a scenario in which each student in a classroom is called one by one to record presence or absence in the school register sheet. This problem is solved by an automated face recognition system. Some automatic attendance-marking systems are already used by many institutions; one such system is the biometric technique. Although it is automatic and a step ahead of the traditional method, it fails to satisfy the time constraint. This project introduces an attendance-marking system that does not interfere with the normal teaching procedure. The system can also be used during examination sessions or other teaching activities where attendance is essential. It eliminates classical student identification, such as calling the name of the student or checking individual identification cards, which not only interrupts the ongoing teaching process but can also be stressful for students during examination sessions.
Attendance is most important from the student's point of view, because a student who fails to attend classes may not be allowed to sit the examination. From the teacher's point of view, maintaining attendance records takes time and cuts into teaching time, yet keeping an attendance record is compulsory in every organisation. Many techniques for recording attendance are being developed, such as biometric iris recognition, fingerprint recognition and, most importantly, face recognition. Face recognition is an image processing task that performs well even in low-light conditions. The old manual process suffers from serious issues such as manipulation, loss of records, the time it requires and fake attendance. The use of pen and paper also causes environmental damage. An automated face recognition system is more accurate and secure, and people today are moving towards paperless, digitalised work.
Figure 1.1: Flow chart of the system – start → input video → detect face → LBP feature extraction → count the number of persons using the tree classifier → if/else condition: on a match (yes) the person details are shown, otherwise (no) the process stops.
In this paper, the local binary pattern and the decision tree classifier algorithm are used to implement the proposed work. The steps involved in the project are listed below.
Step #1: Read the video (MP4 or MOV format).
Step #2: Detect the face in each frame.
Step #3: Store each face with its LBP features.
Step #4: Count the total number of faces using the tree classifier.
Step #5: If a face matches the database, show the details of that person.
Step #6: If it does not match, prompt the message "Person is not valid".
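As a hedged illustration of how these six steps could be wired together in MATLAB, the sketch below uses vision.CascadeObjectDetector for detection and the Computer Vision Toolbox function extractLBPFeatures together with fitctree as stand-ins for the project's LBP and tree classifier code; the file name and the database variables dbFeats/dbLabels are assumptions, not the project's exact code.

% Minimal sketch of the six steps above (assumed file names and variables).
vr = VideoReader('input_video.mp4');            % Step 1: read the video
detector = vision.CascadeObjectDetector();      % Viola-Jones face detector
faceFeats = [];
while hasFrame(vr)
    frame  = readFrame(vr);
    bboxes = step(detector, frame);             % Step 2: detect faces in the frame
    for k = 1:size(bboxes, 1)
        face = imresize(imcrop(frame, bboxes(k,:)), [128 128]);
        faceFeats = [faceFeats; extractLBPFeatures(rgb2gray(face))]; %#ok<AGROW> Step 3
    end
end
% Steps 4-6: count the faces and check them against the stored database with
% a tree classifier (fitctree stands in for the binary tree classifier).
numFaces  = size(faceFeats, 1);
tree      = fitctree(dbFeats, dbLabels);
predicted = predict(tree, faceFeats);           % known person ID or "not valid"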
The basic application of face recognition is person identification or verification, used in schools, hospitals, corporations and by customs authorities at airports. A face recognition system could replace current unreliable and outdated identification methods. Several methods are already used for attendance, such as entering a PIN code, using a password or presenting an ID card. The disadvantage of such methods is that they rely on the cooperation of the participants, whereas a person identification system based on the analysis of (frontal) face images can be effective without the participant's cooperation or knowledge. Despite the fact that numerous commercial face recognition systems are already in use, this form of identification remains an interesting topic for researchers. This is because such systems perform well under relatively simple and controlled environments, but perform much worse when variations in factors such as pose, viewpoint, facial expression and the time at which the pictures are taken are present.
1.3 LITERATURE REVIEW
The basic process of person identification by face recognition can be divided into four main sections, as shown in figure 1.2: face detection and normalisation; feature extraction, using methods such as Histograms of Oriented Gradients (HOG), Scale Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF) or Local Binary Patterns (LBP); and classification using a decision tree classifier. In the face detection and normalisation part, the video frame is scaled and rotated and the faces are cropped from the frame. The input data can be an image or a video; after the face is detected from the input data, it is normalised according to the user requirement. In the feature extraction stage, features are extracted from the detected face using a popular algorithm such as HOG, SURF or LBP. The classification stage is the final stage, in which the image features are matched against the features stored in the database. If the features match the database the person is marked present; if they do not match, the person is marked absent.
Figure 1.2: The basic face recognition pipeline – face detection and normalisation, feature extraction, and classification against the stored database.
Research paper                         Method used in paper     Accuracy of result   False detections
Yang et al. (2002, pp. 36-37)          Knowledge-based method   83.33%               28
Ryu et al. (2006) [22]                 Image-based method       89.1%                32
Feraud et al. (2001)                   Neural network-based     86.0%                8
Rowley et al. (1998) [23]              Neural network-based     86.2%                23
Wang et al. (2016)                     CNN-based                98.1%                -
Hjelmås and Low (2001, p. 240) [24]    Edge detection-based     76%                  30
Viola and Jones (2001) [25]            Viola-Jones              88.84%               103
Wang et al. (2015, p. 318) [26]        PCA with SVM             89%                  110
Thai et al. (2011) [27]                Canny, PCA, ANN          85.7%                N/A

Table 1.1: Face detection papers with different methods.
1.3.1 Gabor filters

f(r, t, \beta, \delta, \gamma_r, \gamma_t) = \frac{1}{2\pi\gamma_r\gamma_t} \exp\left[ -\frac{1}{2}\left( \left(\frac{r}{\gamma_r}\right)^2 + \left(\frac{t}{\gamma_t}\right)^2 \right) + j\beta ( r\cos\delta + t\sin\delta ) \right]    (1)

where
\gamma_r, \gamma_t are the spatial spreads,
\beta is the frequency,
\delta is the orientation.
Gabor filters have been found to be particularly appropriate for image texture representation and discrimination. From a theoretical viewpoint, Okajima [8] derived Gabor functions as solutions to a certain mutual-information maximisation problem, showing that the Gabor receptive field can extract the maximum information from local image regions. Researchers have also shown that Gabor features, when appropriately designed, are invariant to translation, rotation and scale [12]. A Gabor filter is a linear filter used for edge detection. In the spatial domain, as shown in [17], a 2D Gabor filter is a Gaussian kernel function modulated by a sinusoidal plane wave. The filter has a real and an imaginary component representing orthogonal directions; the two components may be combined into a complex number or used individually.
Real part:

x(r, t; \lambda, \psi, \delta, \phi) = \exp\left( -\frac{r^2 + t^2}{2\delta^2} \right) \cos\left( 2\pi\frac{r'}{\lambda} + \psi \right)    (2)

Imaginary part:

x(r, t; \lambda, \psi, \delta, \phi) = \exp\left( -\frac{r^2 + t^2}{2\delta^2} \right) \sin\left( 2\pi\frac{r'}{\lambda} + \psi \right)    (3)
where

r' = r\cos\phi + t\sin\phi    (4)

and \lambda is the wavelength of the sinusoidal carrier, \psi its phase offset, \delta the spread of the Gaussian envelope and \phi the orientation. The Gabor representation of a face image is the result of convolving the video frame V(r, t) with the bank of Gabor filters f_{u,v}(r, t). The convolution result is complex-valued and can be decomposed into real and imaginary parts:

F_{u,v}(r, t) = V(r, t) * f_{u,v}(r, t)    (5)

P_{u,v}(r, t) = Re[ F_{u,v}(r, t) ]    (6)

Q_{u,v}(r, t) = Im[ F_{u,v}(r, t) ]    (7)

S_{u,v}(r, t) = \sqrt{ P_{u,v}^2(r, t) + Q_{u,v}^2(r, t) }    (8)

\phi_{u,v}(r, t) = \arctan\left( \frac{ Q_{u,v}(r, t) }{ P_{u,v}(r, t) } \right)    (9)
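As a hedged illustration only, the real part of equation (2) can be built directly as a convolution kernel in MATLAB; the kernel size, spread, orientation and wavelength below are placeholder values rather than settings taken from the paper (the Image Processing Toolbox functions gabor and imgaborfilt offer an equivalent built-in route).

sz     = 31;                          % kernel size (assumed)
delta  = 4;                           % spread of the Gaussian envelope
phi    = pi/4;                        % orientation
lambda = 8;                           % wavelength of the sinusoidal carrier
[t, r] = meshgrid(-floor(sz/2):floor(sz/2));
rp = r*cos(phi) + t*sin(phi);                                  % rotated coordinate r' of eq. (4)
g  = exp(-(r.^2 + t.^2)/(2*delta^2)) .* cos(2*pi*rp/lambda);   % real part, eq. (2)
% response = imfilter(grayFrame, g);                           % one Gabor response map, eq. (5)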
1.3.2 Histograms of Oriented Gradients (HOG)
Histograms of Oriented Gradients are widely used in computer vision, pattern recognition and image processing to detect and recognise visual objects (e.g. faces). They are computed on a dense grid of cells, with overlapping local contrast normalisation of the gradient-orientation histograms to improve detector performance [5]. This feature set performs well for shape-based object categories (e.g. face detection) because it captures the distribution of local intensity gradients, even without precise knowledge of the corresponding gradient positions [4]. To extract HOG descriptors, the occurrences of edge orientations are first counted in a local neighbourhood of the image.
The horizontal and vertical gradients, the gradient magnitude and the gradient orientation are computed as

L_r(r, t) = K(r+1, t) - K(r-1, t)    (10)

L_t(r, t) = K(r, t+1) - K(r, t-1)    (11)

L(r, t) = \sqrt{ L_r^2(r, t) + L_t^2(r, t) }    (12)

\phi(r, t) = \tan^{-1}\left( \frac{ L_t(r, t) }{ L_r(r, t) } \right)    (13)
The Histogram of Oriented Gradients algorithm describes an image with respect to a local coordinate reference by calculating the local gradient directions. At present, the HOG approach is applied to image recognition and has achieved a good success rate in human face detection.

The HOG feature is based on a histogram of oriented gradients. It not only describes the contours of the face, but is also insensitive to illumination changes and small offsets. The face descriptor is obtained by concatenating the features of all blocks in a line. Taking an input image of 256*256 pixels as an example, the HOG feature is calculated as follows:
1) The video is taken as input, given by the user.
2) Gradient calculation with the filter [-1 0 1] and its transpose, combined with median filtering, yields the horizontal and vertical gradients of the image.
3) The input video is converted into a sequence of image frames. Each frame is divided into cells of 256*256 pixels, and each cell is further divided into four small blocks of 2*2 pixels each.
4) A histogram of oriented gradients is obtained; the coordinate of the histogram represents the 13 direction channels selected earlier.
5) Normalisation associates a descriptor vector with the pixels; local contrast is corrected by block normalisation, and the histograms of the cells in each block are normalised.
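As a hedged sketch of these steps, MATLAB's Computer Vision Toolbox provides extractHOGFeatures, which performs the gradient, binning and block-normalisation stages internally; the file name and cell size below are placeholders rather than the project's settings.

face = imresize(rgb2gray(imread('face.jpg')), [256 256]);      % assumed input face
[hogFeat, hogVis] = extractHOGFeatures(face, 'CellSize', [8 8]);
figure; imshow(face); hold on; plot(hogVis);                   % overlay the oriented-gradient cells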
1.3.3 Scale Invariant Feature Transform (SIFT)
The Scale Invariant Feature Transform (SIFT), first proposed by Lowe [6], has become one of the main research interests in pattern recognition because of its excellent performance in object recognition. The SIFT method first detects local key points that are stable across images at several resolutions, and then uses scale and rotation information to represent those key points. SIFT features are quite similar to LBP features, with local histogram patterns representing the whole face image. Although SIFT performs excellently in visual perception, whether it is a good descriptor for face images needs further analysis, because object recognition requires only coarse features whereas face recognition needs much more discriminative features. An investigation of SIFT features for face representation has been carried out as a first attempt to analyse the SIFT approach in the face analysis context [7].

L(r, t, \sigma) = ( S(r, t, k\sigma) - S(r, t, \sigma) ) * P(r, t) = J(r, t, k\sigma) - J(r, t, \sigma)    (14)

Local maxima and minima of L(r, t, \sigma) are then computed by comparing each sample point to its eight neighbours in the current image and its nine neighbours in the scale above and below. At the key-point scale, the gradient magnitude g(r, t) and orientation \phi(r, t) are computed from pixel differences. Thereafter, an orientation is determined for each interest point by building a histogram of gradient orientations, weighted by the gradient magnitudes, over the key point's neighbourhood; together with the scale, this provides a scale- and rotation-invariant coordinate system for the descriptor.

g(r, t) = \sqrt{ ( J(r+1, t) - J(r-1, t) )^2 + ( J(r, t+1) - J(r, t-1) )^2 }    (15)

\phi(r, t) = \tan^{-1}\left( \frac{ J(r, t+1) - J(r, t-1) }{ J(r+1, t) - J(r-1, t) } \right)    (16)
The Scale Invariant Feature Transform descriptor, proposed by David Lowe, permits local matching between different images by using invariant key points that are robust to scale and rotation. The SIFT descriptor is computed in four steps:

1) Detect the potential key points in the image using the difference of Gaussians, written GoD in Equation 17 below.

GoD(l, m, \sigma) = ( D(l, m, k\sigma) - D(l, m, \sigma) ) * I(l, m)    (17)

D(l, m, \sigma) = \frac{1}{2\pi\sigma^2} \, e^{ -\frac{l^2 + m^2}{2\sigma^2} }    (18)

The parameters used in Equations 17 and 18 are: D, the Gaussian kernel; k, the scale factor; \sigma, the scale; and I(l, m), the source image.
2) Key points that correspond to a stable maximum or minimum are kept; the other points are unstable and are rejected.
3) An orientation and a magnitude are assigned to each key point.
4) Each key point is encoded into a 128-dimensional vector that is invariant to scale, rotation and illumination changes.

So, when the SIFT algorithm is applied to an image, it detects a certain number N of key points that describe the image. On one hand, the number of key points depends on the SIFT parameters, such as the number of octaves, the edge threshold and the Gaussian kernel; on the other hand, it depends on the image type, such as RGB, grey-scale, depth map or binary. All key points are gathered in a matrix, called the SIFT matrix, in which the number of columns is fixed at 128 and the number of rows equals N. The K-means algorithm then transforms the SIFT matrices of the RGB, saliency-map and LTB images into vectors, which are concatenated into a single vector used for classification.
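MATLAB R2018a has no built-in SIFT detector, so the following hedged sketch only reproduces the difference-of-Gaussians step of Equation (17); the input file, sigma and k are illustrative assumptions rather than the paper's values.

I     = im2double(rgb2gray(imread('face.jpg')));  % assumed source image I(l,m)
sigma = 1.6;  k = sqrt(2);                        % assumed scale parameters
G1  = imgaussfilt(I, sigma);                      % D(l,m,sigma)   convolved with I
G2  = imgaussfilt(I, k*sigma);                    % D(l,m,k*sigma) convolved with I
GoD = G2 - G1;                                    % candidate key points are extrema of GoD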
1.3.4 Speed-up robust features (SURF)
SURF, written in full as Speeded-Up Robust Features, is a scale- and in-plane-rotation-invariant feature. The SURF [18] feature is invariant to rotation, scale and brightness. For face recognition, invariance with respect to rotation is often not necessary, so the upright version of the SURF descriptor is used here. The SURF interest-point detector works by computing the integral image, applying approximate second-derivative (box) filters to it, and then performing non-maximal suppression; local maxima in (x, y, s) space are refined by quadratic interpolation. For the interest-point descriptor, the window around each point is divided into a 4*4 grid of 16 sub-windows, and Haar wavelet outputs are computed within each sub-window, which yields a 64-element descriptor. For the 9x9 filter, l0 = 3, where l0 is the length of the positive or negative lobe in the direction of the derivative. To keep a central pixel, l0 must increase by a minimum of 2 pixels, which increases the filter dimension by 6; therefore the filter sizes are 9x9, 15x15, 21x21 and 27x27.
Once an interest point has been found:
– Place a window around the point.
– Divide it into 4x4 sub-windows.
– In each sub-window, measure dx and dy at 25 (5x5) places.
– Sum over all 25 places to get 4 values per sub-window.
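A hedged sketch of upright SURF extraction with the Computer Vision Toolbox follows; the image file is a placeholder, while detectSURFFeatures and extractFeatures implement the integral-image detector and the 64-element descriptor described above.

I   = rgb2gray(imread('face.jpg'));                               % assumed input image
pts = detectSURFFeatures(I);                                      % interest-point detection
[surfFeat, validPts] = extractFeatures(I, pts, 'Upright', true);  % 64-D upright SURF descriptors
figure; imshow(I); hold on; plot(validPts.selectStrongest(20));   % strongest interest points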
Figure 1.3: SURF filter sizes per octave.
• First octave filter sizes: 9, 15, 21, 27.
• Second octave filter sizes: 15, 27, 39, 51 – the size increases by 12 each time (not 6); the octave spans scales from s = 1.2*21/9 = 2.8 to s = 1.2*45/9 = 6 (some overlap with the first octave), and it is acceptable to measure at every other pixel in the image (which saves computation, like down-sampling).
• Third octave filter sizes: 27, 51, 75, 99 – the size increases by 24 each time; the octave spans scales from s = 1.2*39/9 = 5.2 to s = 1.2*87/9 = 11.6, and it is acceptable to measure at every 4th pixel in the image.
Figure 1.4: Speed-up robust features example.

Figure 1.5: SURF overview – filter sizes across octaves (9, 15, 21, 27; 15, 27, 39, 51; 27, 51, 75, 99) with the corresponding scale ranges 1.6 ≤ s ≤ 3.2, 2.8 ≤ s ≤ 6.0 and 5.2 ≤ s ≤ 11.6, where s = 1.2 * (filter size / 9).
1.4 METHODOLOGY
Figure 1.6: The basic face detection methodology and classification.
Face detection approaches fall into two broad categories: feature-based and image-based. The feature-based category is subdivided into low-level analysis and feature analysis; low-level analysis is further subdivided into skin colour and edge detection, while feature analysis covers LBP, Viola-Jones and Gabor features. The image-based category is divided into neural networks and statistical approaches, and the statistical approach is further subdivided into DTC, PCA and SVM. This paper demonstrates the combination of the local binary pattern (LBP) and the decision tree classifier (DTC), which are explained in sections 1.4.1 and 1.4.2 below.
1.4.1 Local Binary Patterns
Local binary patterns (LBP) were first introduced by Ojala et al. [9] as a grey-scale texture descriptor. Figure 1.7 shows the basic operation of the local binary pattern. Consider an image patch in the form of a 3*3 matrix, in which the central pixel serves as the reference for its neighbouring pixels.
In the general setting, an LBP operator assigns a decimal number to a pair (g, c):

R = \sum_{l=1}^{s} 2^{\,l-1} \, I(g, c_l)    (19)

where g is the centre pixel and c = (c_1, ..., c_s) is the set of pixels sampled from the neighbourhood of g, with

I(g, c_l) = 1 if g < c_l, and 0 otherwise.    (20)
Wang used both the local binary pattern (LBP) and the Histogram of Oriented Gradients (HOG) descriptor to improve detection performance in his paper [10]. In figure 1.7, the value 4 is taken as the central pixel, and the neighbouring pixel values are compared against it [11]. If a neighbouring pixel is greater than the central pixel it is coded as one (1); if it is smaller than the central pixel it is coded as zero (0).
Figure 1.7: Local binary pattern (LBP) operator example – the 3*3 neighbourhood [6 1 8; 2 4 7; 5 9 3] is thresholded against the central value 4, giving the binary pattern [1 0 1; 0 - 1; 1 1 0].
Consider the clockwise scenario. The neighbouring pixel 6 in figure 1.7 is greater than the central pixel 4, so it is coded as one. Similarly, the neighbouring pixel 1 is smaller than the central pixel 4, so it is coded as zero. The neighbouring pixel 8 is again greater than the central pixel 4, so it is coded as one, and so on in a clockwise direction.
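The thresholding just described can be written out directly; the sketch below reproduces the 3*3 example of figure 1.7, with the clockwise neighbour ordering from the top-left assumed for illustration.

patch = [6 1 8; 2 4 7; 5 9 3];                              % 3*3 neighbourhood of figure 1.7
g     = patch(2,2);                                         % central pixel (4)
c     = [patch(1,1) patch(1,2) patch(1,3) patch(2,3) ...    % neighbours read clockwise
         patch(3,3) patch(3,2) patch(3,1) patch(2,1)];
bits  = double(c > g);                                      % I(g, c_l) of equation (20)
R     = sum(bits .* 2.^(0:numel(bits)-1));                  % decimal LBP code of equation (19)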
Figure 1.8: LBP circular neighbourhoods with different values of L and M.
In figure 1.8, the black and blue points represent sampling positions for different values of L and M. M is the radius of the circle, taken as 1, 2.5 and 4 in the three scenarios, and L is the number of neighbourhood pixels, taken as 8, 12 and 16 respectively. The value of each neighbourhood pixel is found in the same way as explained for figure 1.7.
1.4.2 Decision Tree classifier
The decision tree classifier (DTC), represented by a flowchart-like tree structure, was introduced by J. R. Quinlan in 1986 [13]. As the name suggests, the decision tree algorithm has a tree-structured model and is used for pattern recognition and classification [14]. Breiman introduced the Classification and Regression Tree (CART) algorithm [15]. Decision trees follow a logical flow and are mainly used as discrete-value classifiers, which is their main advantage; on the other hand, they also have disadvantages, such as over-sensitivity to irrelevant and noisy data [16]. The decision tree is a learning algorithm: as long as it is trained with sufficiently varied inputs, it performs very well. The gain ratio is calculated over the training-set attributes for all features. As shown in the example of figure 1.9, the tree structure has three main parts: the root, the subsets and the leaf nodes. The root collects the data given by the user; the data are then split in the subset section, where they may share the same attribute or have different attributes; and the leaf nodes are created by repeating the first two steps.
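A hedged sketch of how such a tree could be trained and queried in MATLAB is given below; dbFeats (an N-by-D matrix of LBP feature vectors) and dbLabels (the matching person IDs) are assumed to come from the manually created database, and fitctree from the Statistics and Machine Learning Toolbox stands in for the paper's tree classifier.

tree     = fitctree(dbFeats, dbLabels);        % CART-style decision tree on the database features
view(tree, 'Mode', 'graph');                   % inspect root, subset splits and leaf nodes
personID = predict(tree, newLbpFeature);       % classify the LBP vector of a new face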
Information entropy is defined as

\beta(T) = - \sum_{j=1}^{r} G_j \log_2 G_j    (21)

where T is the test data, m_j is the sample set of class j (j = 1, 2, ..., r), and the proportions are

G_j = m_j / |T|
G_{ji} = m_{ji} / |T_j|
Figure 1.9: Basic structure of a decision tree classifier – the root of the tree splits on attributes I, J and K into subset nodes (NODE 1, NODE 2, NODE 3), and NODE 4 is a leaf node.
1.4.3 Technical Requirements
In this project both hardware and software play a major role. On the hardware side, a standard computer needs to be installed in the school office room where students enter, and a camera must be positioned in the room to capture video at 25 fps with a resolution of 512 by 512 pixels. Secondary memory is needed to store the image and video database. On the software side, MATLAB R2018a is used on Windows 10 with a quad-core Intel i3 CPU at 3.33 GHz and 2 GB RAM.
1.5 RESULT AND DISCUSSION
Figure 1.10 below shows the GUI, built in MATLAB R2018a, for the attendance-using-face-recognition system. The first step is to input a video in a supported format with a carefully chosen frame rate. The system then automatically detects the faces in the video and stores them in a folder called People. Once this step is over, the Count button is used: it counts the number of faces (persons) using the decision tree classifier. The last step is identification, performed with the People Identify button. When this button is pressed, the detected faces are matched against the stored database and the result is shown accordingly: if the person is valid, their name and address are displayed; if not, the message "person is not valid" is prompted.
Figure 1.10: GUI for Attendance using facial Recognition System.
Figure 1.11 below shows the database, which is created manually by modifying the MATLAB R2018a command program. The face images are first stored in a folder called Data; in total 76 faces are stored there, and their names and addresses are entered manually by the program developer. Each face image must be an RGB colour JPG file with a sequence number. The faces used in this database were collected from different students with a mobile camera, and the sample input video used in this project was downloaded from the internet.
Figure 1.11: Database created by input video.
The first step of the project is to input the source video file from the database; the video file is loaded from the database folder, and direct input from a webcam is not supported. When a video file is loaded from the database, the functionality of figure 1.12 is shown. The "uigetfile" call on line 84 opens a MATLAB dialog from which an MP4 or MOV video file can be selected, "strcat" on line 86 builds the full file name, and the MATLAB R2018a "VideoReader" command on line 90 reads the selected video file.
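A hedged reconstruction of this loading step is shown below; only uigetfile, strcat and VideoReader are taken from the text, and the variable names are assumptions.

[fname, fpath] = uigetfile({'*.mp4;*.mov', 'Video files (*.mp4, *.mov)'});  % choose the input video
vidFile = strcat(fpath, fname);                                             % full path of the chosen file
vr      = VideoReader(vidFile);                                             % read the selected video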
Figure 1.12: Input video from static source Folder/Database.
After MATLAB reads the video as a sequence of n frames, the vision.CascadeObjectDetector() command on line 95 is used to detect the face in each frame. The delete command on line 101 removes the old .jpg files from the People folder. Then, using the for loop at line 102, the input video is read and displayed on axis 1, and a border is drawn around each detected face and shown to the user. On line 111 the "imresize" command resizes each detected face to 128*128 pixels, and imwrite(im2, fn) writes the image in JPG format into the People folder.
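A hedged sketch of this detection-and-storage loop follows; vision.CascadeObjectDetector, imresize and imwrite come from the text, while the display call, the insertShape overlay and the file naming inside the People folder are assumptions used for illustration.

vr       = VideoReader('input_video.mp4');                   % assumed input video
detector = vision.CascadeObjectDetector();                   % face detector from the text
idx = 0;
while hasFrame(vr)
    frame  = readFrame(vr);
    bboxes = step(detector, frame);                          % detect faces in the frame
    imshow(insertShape(frame, 'Rectangle', bboxes));         % draw border lines around faces
    for k = 1:size(bboxes, 1)
        idx = idx + 1;
        im2 = imresize(imcrop(frame, bboxes(k,:)), [128 128]);  % resize to 128*128
        fn  = fullfile('People', sprintf('%d.jpg', idx));       % assumed file naming
        imwrite(im2, fn);                                        % store the detected face
    end
end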
Figure 1.13: Count total no of people faces in video.
Count is the second step, in which the face data created in the People folder in the first step is read face by face in sequence. The dn1 = strcat(dn, '*.jpg') command builds the pattern used to read the face images from the People folder. In the for loop on line 146 each image is read and shown on axis 2; after reading, the lbp_sir(im) command on line 152, which is basically a local binary pattern function, is applied, and axis 3 shows the local binary pattern result. The NetTree = [] command initialises the data used by the decision tree classification algorithm.
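A hedged sketch of the Count step is given below; lbp_sir is the project's own LBP function, so extractLBPFeatures from the Computer Vision Toolbox is used here as a stand-in for it, and the folder layout is assumed.

files   = dir(fullfile('People', '*.jpg'));       % faces saved in the first step
NetTree = [];
for i = 1:numel(files)
    im  = imread(fullfile('People', files(i).name));
    lbp = extractLBPFeatures(rgb2gray(im));       % stand-in for lbp_sir(im)
    NetTree = [NetTree; lbp];                     %#ok<AGROW> feature matrix for the tree classifier
end
numPeople = size(NetTree, 1);                     % number of faces counted in the video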
Figure 1.14: People identify.
The People Identify callback is embedded in the same GUI; as required, it displays the input image side by side with the matched details, such as the person's name and address.
Figure 1.15: Restart the program.
The commands on lines 416-418 restart the program: clc clears the command window, clear all clears the workspace history, and close all closes all currently running windows, so this also acts as the exit button, as shown in figure 1.15.
Figure 1.16: Exit the GUI.
Figure 1.17: Decision tree classification result.
1.6 CONCLUSION
In this paper, various approaches to face detection, feature extraction and classification are discussed and closely examined using a database built from recorded video, with system functions used to detect faces. The techniques examined are the local binary pattern and decision tree classification. The graphical user interface is created with MATLAB R2018a: it takes an input video from the user, detects the faces, counts the number of people present in the video, and then shows the result according to the database created by the user. The experimental results show that the technique used in this paper successfully identifies a person's face and matches it with the database.
In future work, more feature extraction algorithms can be combined with the same classifier for further research; using different methods, more accurate and sophisticated results can be achieved, although implementing new techniques and methods requires more time. The parameters used to compute the local binary pattern of an image can also be tuned and improved. The graphical user interface is built in MATLAB and can easily be modified, so in the future it could be converted into a standalone application, making it more secure and harder to modify. A user ID and password section can also be added to the GUI to provide additional security. Because the database is created manually by the organisation's staff, data confidentiality is very important, and it is essential to inform people that their faces are used for the attendance system. In this project the face data were created from persons' faces for experimental purposes, but a real deployment would be quite different.
Figure 1.18 shows the Gantt chart of the project, which was implemented in seven phases. The actual time taken to complete each phase is represented by the orange bars and the estimated time by the blue bars. The initial planning was done in December and included the study of various face recognition algorithms. The GUI design followed in January, covering the installation of MATLAB and the basics of building a GUI. Database creation involved collecting the faces of various students. Algorithm implementation was the most difficult part of the project, requiring advanced study of MATLAB code and implementation of the chosen algorithms, after which the algorithms were connected to the GUI. Code optimisation is necessary for every project because it reduces the execution time and removes remaining errors. Report preparation was the final stage of the project.
Figure 1.18: Gantt chart – estimated versus actual completion dates (12 December to 25 March) for the seven phases: initial planning, GUI design, database creation, algorithm implementation (LBP & DTC), establishing the connection between GUI and algorithm, code optimisation, and report preparation.
References
[1] K. Kim, "Face Recognition using Principle Component Analysis", Department of Computer Science, University of Maryland, College Park, MD 20742, USA.
[2] H. K. Ekenel and R. Stiefelhagen, "Analysis of local appearance based face recognition: Effects of feature selection and feature normalization", in CVPR Biometrics Workshop, New York, USA, 2006.
[3] B. G. Bhatt and Z. H. Shah, "Face Feature Extraction Techniques: A Survey", National Conference on Recent Trends in Engineering & Technology, 13-14 May 2011.
[4] N. Thakare, M. Shrivastava, N. Kumari, N. Kumari, D. Kaur and R. Singh, "Face Detection And Recognition For Automatic Attendance System", International Journal of Computer Science and Mobile Computing, Vol. 5, Issue 4.
[5] L. Cohen, "Time-frequency distributions - a review", Proc. IEEE, 77(7), pp. 941-981, 1989.
[6] D. Lowe, "Distinctive image features from scale-invariant keypoints", International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[7] M. Bicego, A. Lagorio, E. Grosso and M. Tistarelli, "On the use of SIFT features for face authentication", Proc. of IEEE Int. Workshop on Biometrics, in association with CVPR, NY, 2006.
[8] K. Okajima, "Two-dimensional Gabor-type receptive field as derived by mutual information maximization", Neural Networks, vol. 11, no. 3, pp. 441-447, 1998.
[9] T. Ojala, M. Pietikäinen and D. Harwood, "A comparative study of texture measures with classification based on featured distributions", Pattern Recognition, 29, pp. 51-59, 1996.
[10] X. Wang, T. X. Han and S. Yan, "An HOG-LBP human detector with partial occlusion handling", in 2009 IEEE 12th International Conference on Computer Vision, pp. 32-39.
[11] "Local Binary Patterns: New Variants and Applications", IEEE Transactions, Vol. 61, No. 4, pp. 990-1001, 2012.
[12] J. G. Daugman, "Uncertainty relations for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters", Journal of the Optical Society of America A, vol. 2, pp. 1160-1169, 1985.
[13] J. R. Quinlan, "Induction of decision trees", vol. 1, no. 1, pp. 81-106.
[14] R. C. Barros, Automatic Design of Decision-Tree Induction Algorithms, Springer.
[15] L. Breiman, J. Friedman, C. J. Stone and R. Olshen, Classification and Regression Trees.
[16] L. Rokach and O. Maimon, "Top-down induction of decision trees classifiers - a survey", vol. 35, no. 4, pp. 476-487.
[17] G. Donato, M. S. Bartlett and J. C. Hager, "Classifying Facial Actions", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 21, pp. 974-989, 1999.
[18] H. Bay, A. Ess, T. Tuytelaars and L. V. Gool, "SURF: Speeded Up Robust Features", Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 346-359, 2008.
[19] H. Bay, A. Ess, T. Tuytelaars and L. V. Gool, "SURF: Speeded Up Robust Features", Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 346-359, 2008.
[20] Y. Wang, H. Ai, B. Wu and C. Huang, "Real time facial expression recognition with adaboost", in Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Vol. 3, pp. 926-929, IEEE, 2004.
[21] H. Bay, A. Ess, T. Tuytelaars and L. V. Gool, "SURF: Speeded Up Robust Features", Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 346-359, 2008.
[22] H. Ryu, S. S. Chun and S. Sull, "Multiple classifiers approach for computational efficiency in multi-scale search based face detection", Advances in Natural Computation, Pt 1, 4221, pp. 483-492, 2006.
[23] H. A. Rowley, S. Baluja and T. Kanade, "Neural network-based face detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1), pp. 23-38, 1998.
[24] E. Hjelmås and B. K. Low, "Face Detection: A Survey", Computer Vision and Image Understanding, 83(3), pp. 236-274, 2001.
[25] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features", Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1, pp. I511-I518, 2001.
[26] K. Wang, Z. Song, M. Sheng, P. He and Z. Tang, "Modular Real-Time Face Detection System", Annals of Data Science, 2(3), pp. 317-333, 2015.
[27] L. H. Thai, N. D. T. Nguyen and T. S. Hai, "A Facial Expression Classification System Integrating Canny, Principal Component Analysis and Artificial Neural Network", International Journal of Machine Learning and Computing, 1(4), pp. 388-393, 2011.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
 
Work Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvWork Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvv
 
Design and analysis of solar grass cutter.pdf
Design and analysis of solar grass cutter.pdfDesign and analysis of solar grass cutter.pdf
Design and analysis of solar grass cutter.pdf
 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girls
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.ppt
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...
 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptx
 
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
 
Churning of Butter, Factors affecting .
Churning of Butter, Factors affecting  .Churning of Butter, Factors affecting  .
Churning of Butter, Factors affecting .
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
 
Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdf
 
An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...An experimental study in using natural admixture as an alternative for chemic...
An experimental study in using natural admixture as an alternative for chemic...
 
Concrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptxConcrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptx
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
 

eliminates classical student identification methods such as calling out each student's name or checking individual identity cards, which not only interrupts the ongoing teaching process but can also be stressful for students during examination sessions. Attendance matters greatly from the student's point of view, because a student who fails to attend classes may not be allowed to sit the examination. From the teacher's point of view, maintaining the attendance record takes time that would otherwise be spent teaching, yet keeping such a record is compulsory in every organisation. Many techniques have been developed for recording attendance, such as iris recognition, fingerprint recognition and, most importantly, face recognition. Face recognition is an image-processing technique that can perform well even in low-light conditions. The old manual process faces serious issues such as manipulation, loss of records, excessive time requirements and fake attendance, and the use of pen and paper also harms the environment. An automated face recognition system is more accurate and secure, and people are increasingly moving towards paperless, digitalised work.
Figure 1.1: Flow chart of the proposed system (start, input video, face detection, LBP feature extraction, person counting with the tree classifier, if/else identity check, stop).
In this paper, the local binary pattern (LBP) descriptor and a decision tree classifier are used to implement the proposed work. The main steps of the project are:
Step 1: Read the input video (MP4, MOV).
Step 2: Detect the face in each frame.
Step 3: Store each face together with its LBP features.
Step 4: Count the total number of faces using the tree classifier.
Step 5: If a face matches the database, show the person's details.
Step 6: If not, prompt the message "Person is not valid".
The basic application of face recognition is person identification or verification, as used in schools, hospitals, companies and at airport customs. A face recognition system could replace current unreliable and outdated identification methods. Several methods are already used for attendance, such as entering a PIN code, using a password, or swiping an ID card into the attendance system. The disadvantage of such methods is that they rely on the cooperation of the participants, whereas a person identification system based on the analysis of (frontal) face images can be effective without the participant's cooperation or knowledge. Although numerous commercial face recognition systems are already in use, this form of identification is still an interesting topic for researchers, because such systems perform well under relatively simple and controlled environments but much worse when variations in pose, viewpoint, facial expression, illumination or capture time are present.
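A minimal MATLAB sketch of steps 1-3 of this pipeline, using Computer Vision Toolbox functions, is shown below; the video file name and the People output folder are illustrative assumptions, not the exact code of the project.

vid      = VideoReader('class_entry.mp4');          % Step 1: read the input video (assumed file name)
detector = vision.CascadeObjectDetector();          % Viola-Jones face detector
faceDir  = 'People';                                % assumed output folder for extracted faces
if ~exist(faceDir, 'dir'), mkdir(faceDir); end
n = 0;
while hasFrame(vid)
    frame  = readFrame(vid);
    bboxes = step(detector, frame);                 % Step 2: detect faces in this frame
    for k = 1:size(bboxes, 1)
        face = imresize(imcrop(frame, bboxes(k, :)), [128 128]);
        n = n + 1;
        imwrite(face, fullfile(faceDir, sprintf('face_%04d.jpg', n)));   % Step 3: save the face crop
    end
end
fprintf('Extracted %d face images.\n', n);
% Steps 4-6 (LBP features, tree classifier and identity lookup) are sketched later in the document.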
1.3 LITERATURE REVIEW
The basic process of person identification using face recognition can be divided into four main stages, as shown in Figure 1.2: face detection, normalization, feature extraction and classification. Commonly used feature descriptors include Histograms of Oriented Gradients (HOG), the Scale Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF) and the Local Binary Pattern (LBP); classification is performed here with a decision tree classifier. In the face detection and normalization stage, each video frame is scaled and rotated and the faces are cropped from it. Figure 1.2 shows the basic operation: the input data, which can be an image or a video, are processed to detect the faces, the detected faces are normalized according to the user's requirements, and features are then extracted from them with a popular algorithm such as HOG, SURF or LBP. The classification stage is the final stage, in which the extracted image features are matched against the features stored in the database; if a match is found the person is marked present, otherwise absent.
Figure 1.2: Basic face recognition pipeline (face detection and normalization, feature extraction, classification against the database).
| Research paper | Method used in paper | Accuracy of result | False detections |
|---|---|---|---|
| Yang et al. (2002, pp. 36-37) | Knowledge-based method | 83.33% | 28 |
| Ryu et al. (2006) [22] | Image-based method | 89.1% | 32 |
| Feraud et al. (2001) | Neural-network-based | 86.0% | 8 |
| Rowley et al. (1998) [23] | Neural-network-based | 86.2% | 23 |
| Wang et al. (2016) | CNN-based | 98.1% | - |
| Hjelmås and Low (2001, p. 240) [24] | Edge-detection-based | 76% | 30 |
| Viola and Jones (2001) [25] | Viola-Jones | 88.84% | 103 |
| Wang et al. (2015, p. 318) [26] | PCA with SVM | 89% | 110 |
| Thai et al. (2011) [27] | Canny, PCA, ANN | 85.7% | N/A |

Table 1.1: Face detection papers with different methods.
1.3.1 Gabor filters

f(r,t;\beta,\theta,\gamma_r,\gamma_t) = \frac{1}{2\pi\gamma_r\gamma_t}\,\exp\!\Big[-\frac{1}{2}\Big(\big(\tfrac{r}{\gamma_r}\big)^2 + \big(\tfrac{t}{\gamma_t}\big)^2\Big) + j\beta\,(r\cos\theta + t\sin\theta)\Big]   (1)

where \gamma_r and \gamma_t are the spatial spreads, \beta is the frequency and \theta is the orientation.

Gabor filters have been found to be particularly appropriate for image texture representation and discrimination. From an information-theoretic viewpoint, Okajima [8] derived Gabor functions as solutions to a mutual-information maximization problem, showing that the Gabor receptive field can extract the maximum information from local image regions. Researchers have also shown that appropriately designed Gabor features are invariant to translation, rotation and scale [12]. A Gabor filter is a linear filter used for edge detection. In the spatial domain, as shown in [17], a 2-D Gabor filter is a Gaussian kernel function modulated by a sinusoidal plane wave. The filter has a real and an imaginary part representing orthogonal directions; the two components may be combined into a complex number or used individually:

Real part:
x(r,t;\lambda,\theta,\psi,\sigma) = \exp\!\Big(-\frac{r^2+t^2}{2\sigma^2}\Big)\cos\!\Big(2\pi\frac{r'}{\lambda} + \psi\Big)   (2)

Imaginary part:
x(r,t;\lambda,\theta,\psi,\sigma) = \exp\!\Big(-\frac{r^2+t^2}{2\sigma^2}\Big)\sin\!\Big(2\pi\frac{r'}{\lambda} + \psi\Big)   (3)
where

r' = r\cos\theta + t\sin\theta   (4)

The Gabor representation of a face image is the result of convolving the video frame V(r,t) with the bank of Gabor filters f_{u,v}(r,t). The convolution result is complex valued and can be decomposed into real and imaginary parts:

F_{u,v}(r,t) = V(r,t) * f_{u,v}(r,t)   (5)
P_{u,v}(r,t) = \operatorname{Re}\,F_{u,v}(r,t)   (6)
Q_{u,v}(r,t) = \operatorname{Im}\,F_{u,v}(r,t)   (7)
S_{u,v}(r,t) = \sqrt{P_{u,v}^2(r,t) + Q_{u,v}^2(r,t)}   (8)
\phi_{u,v}(r,t) = \arctan\!\big(Q_{u,v}(r,t)/P_{u,v}(r,t)\big)   (9)
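As a hedged illustration, the following MATLAB sketch applies a small Gabor filter bank to a grayscale face image using the Image Processing Toolbox functions gabor and imgaborfilt (available from R2015b onward); the face file name, wavelengths and orientations are illustrative assumptions, not values prescribed by this paper.

face = rgb2gray(imread('face_0001.jpg'));     % assumed face crop from the People folder
gb = gabor([4 8 16], 0:45:135);               % Gabor filter bank (illustrative wavelengths and orientations)
[mag, phase] = imgaborfilt(face, gb);         % magnitude and phase responses, cf. Eqs. (8)-(9)
imshow(rescale(mag(:,:,1)));                  % display one magnitude response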
1.3.2 Histograms of Oriented Gradients (HOG)
Histograms of Oriented Gradients are widely used in computer vision, pattern recognition and image processing to detect and recognize visual objects such as faces. They are computed on a dense grid of cells, with overlapping local contrast normalization of the gradient-orientation histograms to improve detector performance [5]. The feature set therefore performs very well for shape-based object categories such as faces, because it captures the distribution of local intensity gradients without requiring exact knowledge of the corresponding gradient positions [4]. To extract HOG descriptors, the occurrences of edge orientations in a local neighbourhood of the image are counted. The video supplied by the user is taken as input and converted into a sequence of image frames; the horizontal and vertical gradients of each frame are obtained by filtering with the derivative kernels [-1 0 1] and its transpose. Each frame is divided into cells, each cell is divided into four small blocks, and a histogram of oriented gradients is computed per cell, with the histogram bins corresponding to the chosen direction channels. Finally, the histograms are represented as vectors associated with the pixels, and block normalization corrects the local contrast by normalizing the histograms of the cells within each block.

L_r(r,t) = K(r+1,t) - K(r-1,t)   (10)
L_t(r,t) = K(r,t+1) - K(r,t-1)   (11)
L(r,t) = \sqrt{L_r^2(r,t) + L_t^2(r,t)}   (12)
\phi(r,t) = \tan^{-1}\!\big(L_t(r,t)/L_r(r,t)\big)   (13)
The histogram of oriented gradients is an algorithm that works in the local coordinate frame of the image by computing the local gradient directions. At present, HOG is widely applied to image recognition and has achieved good success rates in human face detection. The HOG feature not only describes the face contours well but is also relatively insensitive to illumination changes and small offsets. The facial features are obtained by concatenating the features of all blocks. Taking a 256x256 input image as an example, the HOG features are extracted as follows:
1) The video supplied by the user is taken as input.
2) The horizontal and vertical gradients of each frame are computed by filtering with the derivative kernels [-1 0 1] and its transpose.
3) The input video is converted into a sequence of image frames; each frame is divided into cells of 256x256 pixels, and each cell is further divided into four small blocks of 2x2 pixels.
4) A histogram of oriented gradients is obtained for each cell; the histogram bins correspond to the 13 direction channels selected earlier.
5) Normalization represents each histogram as a vector associated with the pixels; block normalization corrects the local contrast and normalizes the histograms of the cells in each block.
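For reference, the Computer Vision Toolbox provides a ready-made HOG extractor; the sketch below is a minimal example with an assumed face file name and cell size, not the exact parameters described above.

face = imresize(rgb2gray(imread('face_0001.jpg')), [128 128]);   % assumed face crop
[hogFeat, hogVis] = extractHOGFeatures(face, 'CellSize', [8 8]); % HOG descriptor and its visualization
fprintf('HOG feature length: %d\n', numel(hogFeat));
imshow(face); hold on;
plot(hogVis);                                                    % overlay the oriented-gradient histograms per cell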
1.3.3 Scale Invariant Feature Transform (SIFT)
The Scale Invariant Feature Transform (SIFT), first proposed by Lowe [6], has become a major research interest in pattern recognition because of its excellent performance in object recognition. The SIFT method first detects local key points that are stable across image resolutions, and then represents them using scale and rotation information. SIFT features are quite similar to LBP features in that local histogram patterns are computed over the whole face image. Although SIFT performs excellently in visual perception tasks, whether it is a good descriptor for face images requires further analysis, because object recognition needs only coarse features whereas face recognition needs far more discriminative features. An investigation of SIFT features for face representation has been carried out as a first attempt to analyse the SIFT approach in the face-analysis context [7].

L(r,t,\sigma) = \big(S(r,t,k\sigma) - S(r,t,\sigma)\big) * P(r,t) = J(r,t,k\sigma) - J(r,t,\sigma)   (14)

where S is the Gaussian kernel, P(r,t) is the input image and J = S * P is the Gaussian-smoothed image. Local maxima and minima of L(r,t,\sigma) are then found by comparing each sample point with its eight neighbours in the current scale and its nine neighbours in the scales above and below. At the selected scale, the gradient magnitude g(r,t) and orientation \phi(r,t) are computed using pixel differences. An orientation is then assigned to each interest point by building a histogram of gradient orientations weighted by the gradient magnitudes in the key point's neighbourhood; together with the scale, this provides a scale- and rotation-invariant coordinate system for the descriptor.

g(r,t) = \sqrt{\big(J(r+1,t)-J(r-1,t)\big)^2 + \big(J(r,t+1)-J(r,t-1)\big)^2}   (15)

\phi(r,t) = \tan^{-1}\!\Big(\big(J(r,t+1)-J(r,t-1)\big)\,/\,\big(J(r+1,t)-J(r-1,t)\big)\Big)   (16)
The SIFT descriptor, proposed by David Lowe, permits local matching between different images by using invariant key points that are robust to scale and rotation. The SIFT descriptor is computed in four steps:
1) Detect the potential key points in the image using the difference of Gaussians (DoG), given by Equation 17:

DoG(l,m,\sigma) = \big[D(l,m,k\sigma) - D(l,m,\sigma)\big] * I(l,m)   (17)

D(l,m,\sigma) = \frac{1}{2\pi\sigma^2}\,e^{-(l^2+m^2)/(2\sigma^2)}   (18)

where D is the Gaussian kernel, k is the scale factor and I(l,m) is the source image.
2) Key points that are local maxima or minima are stable and are kept; the other points are unstable and are rejected.
3) An orientation and a magnitude are assigned to each key point.
4) Each key point is encoded into a 128-dimensional vector that is invariant to scale, rotation and illumination changes.
Applying the SIFT algorithm to an image therefore yields a certain number N of key points describing the image. On one hand, the number of key points depends on SIFT parameters such as the number of octaves, the edge threshold and the Gaussian kernel; on the other hand, it depends on the image type (RGB, gray-scale, depth map or binary). All key points are gathered in a matrix called the SIFT matrix, with 128 columns and N rows. The K-means algorithm then transforms the SIFT matrices of the RGB, saliency-map and LTB images into vectors, which are concatenated into a single vector used for classification.
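A minimal sketch of the difference-of-Gaussians step (Equations 17-18) is shown below for MATLAB R2018a using imgaussfilt; the image file name and scale values are illustrative assumptions. (MATLAB's built-in detectSIFTFeatures only appeared in later releases, so only the DoG stage is sketched here.)

img   = im2double(rgb2gray(imread('face_0001.jpg')));            % assumed input image
sigma = 1.6;  k = sqrt(2);                                       % illustrative scale parameters
dog   = imgaussfilt(img, k*sigma) - imgaussfilt(img, sigma);     % difference of Gaussians, Eq. (17)
imshow(rescale(dog));                                            % candidate key points are local extrema of this response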
1.3.4 Speed-up Robust Features (SURF)
SURF, written in full as Speeded-Up Robust Features, is a scale- and in-plane-rotation-invariant feature [18]. The SURF feature is invariant to rotation, scale and brightness. For the application of face recognition, invariance with respect to rotation is often not necessary, so the upright version of the SURF descriptor can be used. The SURF interest-point detector is computed on the integral image by applying approximated second-derivative (box) filters, followed by non-maximal suppression; local maxima in (x, y, s) space are then refined by quadratic interpolation. To build the descriptor, the window around each interest point is divided into a 4x4 grid of 16 sub-windows; within each sub-window the Haar wavelet outputs dx and dy are measured at 5x5 = 25 sample points and summed, giving four values per sub-window and a 64-element descriptor in total. For the 9x9 filter, l0 = 3 is the length of the positive or negative lobe in the direction of the derivative. To keep a central pixel, l0 must be increased by a minimum of 2 pixels, which increases the filter dimension by 6; therefore the filter sizes in the first octave are 9x9, 15x15, 21x21 and 27x27. A short usage sketch is given after Figure 1.5.
Figure 1.3: Speeded-Up Robust Features example.
• First octave filter sizes: 9, 15, 21, 27.
• Second octave filter sizes: 15, 27, 39, 51; the size increases by 12 each time (not 6), spanning scales from s = 1.2 x 21/9 = 2.8 to s = 1.2 x 45/9 = 6.0 (with some overlap with the first octave), and the response may be measured at every other pixel in the image to save computation, similar to down-sampling.
• Third octave filter sizes: 27, 51, 75, 99; the size increases by 24 each time, spanning scales from s = 1.2 x 39/9 = 5.2 to s = 1.2 x 87/9 = 11.6, and the response may be measured at every fourth pixel in the image.
Figure 1.4: Speeded-Up Robust Features example.
Figure 1.5: SURF overview (filter sizes per octave and the corresponding scale ranges 1.6 <= s <= 3.2, 2.8 <= s <= 6.0 and 5.2 <= s <= 11.6).
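The Computer Vision Toolbox exposes SURF directly; the following hedged sketch detects and extracts upright SURF descriptors from a face image (the file name is an assumption for illustration).

face   = rgb2gray(imread('face_0001.jpg'));                         % assumed face crop
points = detectSURFFeatures(face);                                  % SURF interest-point detection
[descr, validPts] = extractFeatures(face, points, 'Upright', true); % upright SURF, 64-element descriptors
imshow(face); hold on;
plot(validPts.selectStrongest(20));                                 % show the 20 strongest interest points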
1.4 METHODOLOGY
Face detection approaches are basically divided into two categories: feature-based and image-based, as shown in Figure 1.6. The feature-based category is subdivided into low-level analysis and feature analysis; low-level analysis is further subdivided into skin colour and edge detection, while feature analysis covers LBP, Viola-Jones and Gabor features. The image-based category is divided into neural networks and statistical approaches, and the statistical approaches are further subdivided into DTC, PCA and SVM. This paper demonstrates the combination of the local binary pattern (LBP) and the decision tree classifier (DTC), which are explained below in sections 1.4.1 and 1.4.2.
Figure 1.6: The basic face detection methodology and classification (feature-based vs. image-based approaches).
1.4.1 Local Binary Patterns
Local binary patterns (LBP) were first introduced by Ojala et al. [9] as a gray-scale texture descriptor. Figure 1.7 shows the basic operation: consider a 3x3 image patch in which the central pixel is taken as the reference (threshold) for its neighbouring pixels. In the general setting, an LBP operator assigns a decimal number to the pair (g, c):

R = \sum_{l=1}^{s} 2^{\,l-1}\, I(g, c_l)   (19)

where g is the centre pixel and c = (c_1, ..., c_s) is the set of pixels sampled from the neighbourhood of g, with

I(g, c_l) = 1 if g < c_l, and 0 otherwise.   (20)

Wang combined the local binary pattern (LBP) with the Histogram of Oriented Gradients (HOG) descriptor to improve detection performance [10]. In Figure 1.7 the centre pixel value is 4, and each neighbouring pixel is compared with it [11]: if a neighbouring pixel is greater than the centre pixel it is coded as one (1); otherwise it is coded as zero (0). A short numeric sketch of this thresholding step follows the figure.
Figure 1.7: Local binary pattern (LBP) operator example (3x3 neighbourhood [6 1 8; 2 4 7; 5 9 3] thresholded at the centre value 4).
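To make the thresholding step concrete, the following MATLAB sketch computes the LBP code of the 3x3 example from Figure 1.7; the clockwise neighbour ordering is an assumption made for illustration.

patch  = [6 1 8; 2 4 7; 5 9 3];                                   % example neighbourhood, centre value 4
centre = patch(2, 2);
nbrs   = [patch(1,1) patch(1,2) patch(1,3) patch(2,3) ...
          patch(3,3) patch(3,2) patch(3,1) patch(2,1)];           % neighbours read clockwise (assumed order)
bits   = double(nbrs > centre);                                   % Eq. (20): 1 where neighbour > centre
code   = sum(bits .* 2.^(0:7));                                   % Eq. (19): weighted binary sum
fprintf('LBP bits: %s -> decimal code %d\n', num2str(bits), code);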
Consider the clockwise case: the neighbouring pixel 6 in Figure 1.7 is greater than the centre pixel 4, so it is coded as one. Similarly, the neighbouring pixel 1 is smaller than the centre pixel 4, so it is coded as zero. The next neighbouring pixel 8 is again greater than the centre pixel 4 and is coded as one, and so on around the neighbourhood in clockwise order.
Figure 1.8: LBP circular neighbourhoods for different values of L and M (L = 8, M = 1; L = 12, M = 2.5; L = 16, M = 4).
In Figure 1.8, the black and blue markers represent the values of L and M. M is the radius of the sampling circle, taken as 1, 2.5 and 4 in the three scenarios, and L is the number of neighbourhood pixels, taken as 8, 12 and 16 respectively. The neighbourhood values are thresholded exactly as explained for Figure 1.7.
1.4.2 Decision Tree Classifier
The decision tree classifier (DTC), represented by a flowchart-like tree structure, was introduced by J. R. Quinlan in 1986 [13]. As the name suggests, the decision tree algorithm builds a tree-structured model and is used for pattern recognition and classification [14]. Breiman introduced the Classification and Regression Tree (CART) algorithm [15]. Decision trees follow a logical flow and are mainly used as discrete-valued classifiers, which is their main advantage; their main disadvantages are over-sensitivity to irrelevant attributes and to noisy data [16]. The decision tree is a learning algorithm: as long as it is trained on sufficiently varied inputs it performs very well. The gain ratio is calculated over the training-set attributes for all features. As shown in the example of Figure 1.9, the tree structure has three main parts: the root, the subset (internal) nodes and the leaf nodes. The root collects the data given by the user; the data are then split and trained in the subset nodes, which may test the same or different attributes; the leaf nodes are created by repeating these two steps. Information entropy is defined as:

\beta(T) = -\sum_{j=1}^{r} G_j \log_2 G_j   (21)

where T is the training set, j = 1, 2, ..., r indexes the classes, m_j is the number of samples of class j, and the proportions are G_j = m_j / |T| and, within a subset T_j, G_{ji} = m_{ji} / |T_j|. A hedged MATLAB sketch of training such a classifier on LBP features follows.
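The sketch below illustrates how LBP features could feed a decision tree in MATLAB; the data folder, the file-naming scheme encoding the person ID and the query image are assumptions for illustration, not the project's actual data.

imgFiles = dir(fullfile('data', '*.jpg'));                        % assumed database folder of labelled face images
X = []; y = [];
for k = 1:numel(imgFiles)
    im = imresize(rgb2gray(imread(fullfile('data', imgFiles(k).name))), [128 128]);
    X(k, :) = extractLBPFeatures(im);                             % LBP feature vector for this face
    y(k, 1) = str2double(strtok(imgFiles(k).name, '_'));          % assumed naming: <personID>_<shot>.jpg
end
tree = fitctree(X, y, 'SplitCriterion', 'deviance');              % entropy-based splits, cf. Eq. (21)
q = extractLBPFeatures(imresize(rgb2gray(imread('face_0001.jpg')), [128 128]));
predictedId = predict(tree, q);                                   % identity predicted for a new face crop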
Figure 1.9: Basic structure of a decision tree classifier (root node, subset nodes and leaf nodes).
1.4.3 Technical Requirements
Both hardware and software play a major role in this paper. On the hardware side, a standard computer needs to be installed in the school office room where students enter, with a camera positioned in the office room to capture video at 25 fps and a resolution of 512 by 512 pixels. Secondary storage is needed to hold the image and video database. On the software side, MATLAB version R2018a running on Windows 10 is used, on a machine with a quad-core Intel i3 CPU at 3.33 GHz and 2 GB of RAM.
1.5 RESULT AND DISCUSSION
Figure 1.10 shows the GUI, built in MATLAB R2018a, for the attendance-using-face-recognition system. The first step is to load a video in a supported format with a carefully chosen frame rate. The system then automatically detects the faces in the video and stores them in a folder called People. The next step is the Count button, which counts the number of faces (persons) using the decision tree classifier. The last step is identification: pressing the People Identify button matches the detected faces against the stored database and shows the result accordingly. If the person is valid, their name and address are shown; if not, the message "Person is not valid" is displayed.
Figure 1.10: GUI for the attendance-using-facial-recognition system.
Figure 1.11 shows the database, which was created manually by editing the MATLAB R2018a program. The face images are first stored in a folder called data; in total 76 faces are stored there, and their names and addresses are entered manually by the program developer. The face images must be RGB JPG files with sequential numbering. The faces in this database were collected from different students with a mobile camera, and the sample input video used in this project was downloaded from the internet.
Figure 1.11: Database created from the input video.
The first step of the project is to load the source video file from the database folder; direct input from a webcam is not supported. Figure 1.12 shows this functionality: the uigetfile call on line 84 opens a file dialog in which the supported MP4 and MOV formats can be selected, strcat on line 86 builds the full file name, and the VideoReader command of MATLAB R2018a is then used on line 90 to read the selected video.
Figure 1.12: Input video loaded from the static source folder/database.
After the video has been read by MATLAB as a sequence of frames, the vision.CascadeObjectDetector() call on line 95 is used to detect the face in each frame. The delete('.\People\*.jpg') call on line 101 removes the old data. The for loop starting on line 102 then reads the input video and displays it on axis 1, drawing a border around each detected face. On line 111 the imresize command resizes each detected face to 128 x 128 pixels, and imwrite(im2, fn) writes the image as a JPG file into the People folder.
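Pieced together from the description above, the detection step plausibly looks like the following hedged reconstruction; the variable names, the loop form and the People folder path are assumptions for illustration, not the author's exact code.

[f, p]   = uigetfile({'*.mp4;*.mov', 'Video files'});             % line 84: choose the input video
vidObj   = VideoReader(strcat(p, f));                             % lines 86/90: build the name and open the video
detector = vision.CascadeObjectDetector();                        % line 95: Viola-Jones face detector
delete(fullfile('People', '*.jpg'));                              % line 101: remove the old face crops
idx = 0;
while hasFrame(vidObj)                                            % frame loop (line 102)
    frame = readFrame(vidObj);
    bbox  = step(detector, frame);
    shown = frame;
    if ~isempty(bbox)
        shown = insertShape(frame, 'Rectangle', bbox);            % draw borders around the detected faces
    end
    imshow(shown);                                                % display on axis 1
    for k = 1:size(bbox, 1)
        im2 = imresize(imcrop(frame, bbox(k, :)), [128 128]);     % line 111: resize to 128 x 128
        idx = idx + 1;
        fn  = fullfile('People', sprintf('%d.jpg', idx));
        imwrite(im2, fn);                                         % save the face crop into the People folder
    end
end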
Figure 1.13: Counting the total number of faces in the video.
Counting is the second step, in which the face data created in the People folder in the first step are read one by one in sequence. The dn1 = strcat(dn, '*.jpg') command builds the pattern used to read the face images from the People folder. The for loop on line 146 reads each image and shows it on axis 2; the lbp_sir(im) call on line 152 is essentially the local binary pattern function, and axis 3 shows the LBP result. The NetTree = [] statement initializes the structure used by the decision tree classification algorithm.
Figure 1.14: People identification.
The People Identify callback is embedded in the same GUI; as required, it displays the input image side by side with the matched corresponding details such as name and address.
Figure 1.15: Restarting the program.
Lines 416-418 restart the program: the clc command clears the command window, clear all clears the workspace history, and close all closes every running figure, which also acts as the exit button shown in Figure 1.15.
Figure 1.16: Exiting the GUI.
Figure 1.17: Decision tree classification result.
1.6 CONCLUSION
In this paper, face detection, feature extraction and classification were analysed and closely examined using a database taken from recorded video, with face detection performed by the system described above. The techniques examined are the local binary pattern and decision tree classification. A graphical user interface was created with MATLAB R2018a: it takes an input video from the user, detects the faces, counts the number of people present in the video and then shows the result against the database created by the user. The experimental results show that the technique used in this paper successfully identifies a person's face and matches it with the database.
In future work, further feature-extraction algorithms can be combined with the same classifier, and different methods may yield more accurate and sophisticated results, although implementing new techniques requires additional time. The parameters used to compute the local binary pattern of an image can also be improved and varied. The GUI is built in MATLAB and can easily be modified, so in the future it could be converted with an app builder into a standalone form, making it more secure and harder to modify; a user-ID and password section could also be implemented in the GUI to add security to the system. Because the database is created manually by the organization's staff, data confidentiality is very important, and it is essential to inform the persons concerned that their face images are used for the face attendance system. In this project the face data were created from people's faces for experimental purposes; a real deployment scenario would be quite different.
Figure 1.18 shows the Gantt chart of the project, which was implemented in seven phases. The actual time taken to complete the project is represented by the orange bars and the estimated time by the blue bars. The initial planning was done in December and included the study of various algorithms related to face recognition. The GUI design followed in January and included installation of the MATLAB software and basic learning for building a GUI. Database creation involved collecting face images from various students. Algorithm implementation was the most difficult part of the project, requiring advanced study of MATLAB code and implementation of the chosen algorithms. After implementation, a connection was made between the algorithm and the GUI. Code optimization is necessary for every project because it reduces the execution time and removes any remaining errors. Report preparation is the final stage of the project.
Figure 1.18: Gantt chart of the project (seven phases from initial planning in December to report preparation in March, showing estimated and actual completion dates).
References
[1] K. Kim, "Face Recognition using Principal Component Analysis," Department of Computer Science, University of Maryland, College Park, MD 20742, USA.
[2] H. K. Ekenel and R. Stiefelhagen, "Analysis of local appearance based face recognition: Effects of feature selection and feature normalization," CVPR Biometrics Workshop, New York, USA, 2006.
[3] B. G. Bhatt and Z. H. Shah, "Face Feature Extraction Techniques: A Survey," National Conference on Recent Trends in Engineering & Technology, 13-14 May 2011.
[4] N. Thakare, M. Shrivastava, N. Kumari, N. Kumari, D. Kaur and R. Singh, "Face Detection and Recognition for Automatic Attendance System," International Journal of Computer Science and Mobile Computing, Vol. 5, Issue 4.
[5] L. Cohen, "Time-frequency distributions: a review," Proceedings of the IEEE, 77(7), pp. 941-981, 1989.
[6] D. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[7] M. Bicego, A. Lagorio, E. Grosso and M. Tistarelli, "On the use of SIFT features for face authentication," Proc. IEEE Int. Workshop on Biometrics, in association with CVPR, NY, 2006.
[8] K. Okajima, "Two-dimensional Gabor-type receptive field as derived by mutual information maximization," Neural Networks, vol. 11, no. 3, pp. 441-447, 1998.
[9] T. Ojala, M. Pietikäinen and D. Harwood, "A comparative study of texture measures with classification based on featured distributions," Pattern Recognition, 29, pp. 51-59, 1996.
[10] X. Wang, T. X. Han and S. Yan, "An HOG-LBP human detector with partial occlusion handling," 2009 IEEE 12th International Conference on Computer Vision, pp. 32-39.
[11] "Local Binary Patterns: New Variants and Applications," IEEE Transactions, Vol. 61, No. 4, pp. 990-1001, 2012.
[12] J. G. Daugman, "Uncertainty relations for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters," Journal of the Optical Society of America A, vol. 2, pp. 1160-1169, 1985.
[13] J. R. Quinlan, "Induction of decision trees," Machine Learning, vol. 1, no. 1, pp. 81-106.
[14] R. C. Barros, Automatic Design of Decision-Tree Induction Algorithms, Springer.
[15] L. Breiman, J. Friedman, C. J. Stone and R. Olshen, Classification and Regression Trees.
[16] L. Rokach and O. Maimon, "Top-down induction of decision trees classifiers: a survey," vol. 35, no. 4, pp. 476-487.
[17] G. Donato, M. S. Bartlett and J. C. Hager, "Classifying Facial Actions," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, pp. 974-989, 1999.
[18] H. Bay, A. Ess, T. Tuytelaars and L. Van Gool, "SURF: Speeded Up Robust Features," Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 346-359, 2008.
[19] H. Bay, A. Ess, T. Tuytelaars and L. Van Gool, "SURF: Speeded Up Robust Features," Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 346-359, 2008.
[20] Y. Wang, H. Ai, B. Wu and C. Huang, "Real time facial expression recognition with adaboost," Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Vol. 3, pp. 926-929, IEEE, 2004.
[21] H. Bay, A. Ess, T. Tuytelaars and L. Van Gool, "SURF: Speeded Up Robust Features," Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 346-359, 2008.
[22] H. Ryu, S. S. Chun and S. Sull, "Multiple classifiers approach for computational efficiency in multi-scale search based face detection," Advances in Natural Computation, Pt 1, 4221, pp. 483-492, 2006.
[23] H. A. Rowley, S. Baluja and T. Kanade, "Neural network-based face detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1), pp. 23-38, 1998.
[24] E. Hjelmås and B. K. Low, "Face Detection: A Survey," Computer Vision and Image Understanding, 83(3), pp. 236-274, 2001.
[25] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, pp. I-511 to I-518, 2001.
[26] K. Wang, Z. Song, M. Sheng, P. He and Z. Tang, "Modular Real-Time Face Detection System," Annals of Data Science, 2(3), pp. 317-333, 2015.
[27] L. H. Thai, N. D. T. Nguyen and T. S. Hai, "A Facial Expression Classification System Integrating Canny, Principal Component Analysis and Artificial Neural Network," International Journal of Machine Learning and Computing, 1(4), pp. 388-393, 2011.