This document provides an introduction to pattern recognition and classification. It discusses key concepts such as patterns, features, classes, supervised vs. unsupervised learning, and classification vs. clustering. Examples of pattern recognition applications are given, such as handwriting recognition, license plate recognition, and medical imaging. The main phases of developing a pattern recognition system are outlined: data collection, feature choice, model choice, training, evaluation, and consideration of computational complexity. Finally, some relevant basics of linear algebra are reviewed.
2. Pattern Recognition/Classification
• Assign an object or an event (pattern) to one of several known
categories (or classes).
Example: assign 100 objects to Category "A" or Category "B" based on properties (features) of the objects such as height, width, and purpose.
Example: will it rain on 15 Jan 2022?
1. Use previous years' data.
2. Create a model.
3. Predict the answer.
4. What is a Pattern?
• A pattern could be an object or
event.
• Typically, represented by a vector x
of numbers.
Examples: biometric patterns, hand gesture patterns.
x = [x1, x2, …, xn]^T
Pattern recognition is about automatically discovering regularities in data:
1. Collect the data (images, voice, transactional data).
2. Store it in some data structure (vectors, matrices).
3. Apply some algorithms, etc.
5. What is a Pattern? (cont'd)
• Loan/credit card applications
• Income, # of dependents, mortgage amount → creditworthiness classification
• Dating services
• Age, hobbies, income → "desirability" classification
• Web documents
• Keyword-based descriptions (e.g., documents containing "football", "NFL") → document classification
6. What is a Class ?
• A collection of “similar” objects.
On the basis of labels there are two tasks:
1. Classification (labels are available; we model the relation between the data and what happened earlier with that data).
2. Clustering (we don't have labels; initially we group purely on the basis of similarity/dissimilarity).
Example classes: female, male.
7. Main Objectives
• Separate the data belonging to different classes.
• Given new data, assign them to the correct
category.
Example: gender classification.
• Features (attributes, properties), e.g., F = {length of hair (lh), glasses (G), facial structure (fs)}, i.e., F = {lh, G, fs}.
• Labels: L = {"male", "female"}.
• Mapping function phi: it maps feature vectors to labels, so that samples labelled M and F end up in Group 1 and Group 2.
• Learning this mapping is an optimization problem, e.g., fitting a function such as y = m1*x1 + (m2*x2)^2 + m3*x3 + … (see the sketch below).
• Fundamental mathematics: the equation of a line, y = mx + c; decision boundaries can be linear as well as non-linear curves.
• Example feature vectors: [5.7, 64, 23], [5.5, 61, 25].
[Figure: scatter plot of weight vs. height, origin at (0, 0), with separate clusters for females and males]
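To make the feature-to-label mapping concrete, here is a minimal sketch (hypothetical height/weight values, scikit-learn assumed to be available) that learns a function phi from two features to the labels "male"/"female" by fitting a linear decision boundary:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training samples: [height, weight] per person (illustrative values only)
X = np.array([[5.7, 64], [5.5, 61], [5.9, 72], [6.1, 80],
              [5.4, 55], [5.6, 60], [6.0, 78], [5.3, 52]])
y = np.array(["male", "female", "male", "male",
              "female", "female", "male", "female"])

# Learn the mapping phi: features -> label (a linear decision boundary)
phi = LogisticRegression().fit(X, y)

# Assign a new, unseen sample to a category
print(phi.predict([[5.8, 66]]))       # predicted label
print(phi.coef_, phi.intercept_)      # parameters of the learned boundary
```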
9. Main Approaches
x: input vector (pattern)
ω: class label (class)
• Generative
– Model the joint probability, p(x, ω).
– Make predictions by using Bayes' rule to calculate p(ω|x).
– Pick the most likely class label ω.
• Discriminative
– No need to model p(x, ω).
– Estimate p(ω|x) by "learning" a direct mapping from x to ω (i.e., estimate the decision boundary).
– Pick the most likely class label ω.
[Figure: two classes ω1 and ω2 in feature space]
How can we define the relationship between labelled data using probability?
Given labelled pairs x1 → ω1, x2 → ω2, …, xn → ωn, which label should an unknown sample x_unk receive?
Example: suppose we have a total of 60k samples, out of which only 4k are labelled (e.g., images of tables and chairs).
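The two approaches can be contrasted with a minimal sketch (scikit-learn assumed; the one-dimensional toy data below is made up for illustration): a generative model such as Gaussian Naive Bayes models p(x, ω) and applies Bayes' rule, while a discriminative model such as logistic regression estimates p(ω|x) directly:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Toy labelled data: one feature, two classes w1 and w2 (illustrative values)
X = np.array([[1.0], [1.2], [0.8], [3.1], [2.9], [3.3]])
w = np.array(["w1", "w1", "w1", "w2", "w2", "w2"])

generative = GaussianNB().fit(X, w)              # models p(x, w), applies Bayes' rule
discriminative = LogisticRegression().fit(X, w)  # estimates p(w|x) directly

x_unk = np.array([[2.0]])
print(generative.predict_proba(x_unk))           # posterior p(w|x) via Bayes' rule
print(discriminative.predict_proba(x_unk))       # posterior from the learned boundary
print(generative.predict(x_unk), discriminative.predict(x_unk))
```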
11. How do we model p(x, ω)?
• Typically, using a statistical model: a probability density function (e.g., Gaussian).
Example: gender classification with classes "male" and "female".
P(x, ω) is the joint probability of sample x and class ω. For a new sample S5 we can:
1. Calculate the probability of sample S5 belonging to class ω = 1.
2. Calculate the probability of sample S5 belonging to class ω = 0.
The model is built from the joint probabilities of the training samples: P(S1, ω0), P(S1, ω1), P(S2, ω0), P(S2, ω1), … (a sketch follows below).
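A minimal sketch of this idea (numpy/scipy assumed; the one-feature samples below are invented): fit a Gaussian density p(x|ω) per class, estimate the priors p(ω) from the class counts, and combine them into the joint p(x, ω) = p(x|ω)·p(ω) for a new sample S5:

```python
import numpy as np
from scipy.stats import norm

# Toy one-feature training samples per class (illustrative values only)
x_class1 = np.array([5.9, 6.0, 6.1, 5.8])   # samples with w = 1
x_class0 = np.array([5.4, 5.5, 5.3, 5.6])   # samples with w = 0

# One Gaussian density p(x | w) per class
mu1, sigma1 = x_class1.mean(), x_class1.std(ddof=1)
mu0, sigma0 = x_class0.mean(), x_class0.std(ddof=1)

# Class priors p(w) estimated from the sample counts
p_w1 = len(x_class1) / (len(x_class1) + len(x_class0))
p_w0 = 1.0 - p_w1

# Joint density p(x, w) = p(x | w) * p(w) for a new sample S5
s5 = 5.7
p_s5_w1 = norm.pdf(s5, mu1, sigma1) * p_w1
p_s5_w0 = norm.pdf(s5, mu0, sigma0) * p_w0
print(p_s5_w1, p_s5_w0)

# Picking the class with the larger joint value is equivalent to applying Bayes' rule
print("w=1" if p_s5_w1 > p_s5_w0 else "w=0")
```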
12. Key Challenges
• Intra-class variability
• Inter-class variability
Examples: the letter "T" in different typefaces (intra-class variability); letters/numbers that look similar (low inter-class variability).
22. The Design Cycle
• Data collection
• Feature Choice
• Model Choice
• Training
• Evaluation
• Computational Complexity
23. Agent Environment
• Agent environment: 1. digital, 2. continuous.
• When developing an ML model, the quality of the data is the most important thing to consider first.
• Understanding/interpreting the collected data leads to two options:
1. Feature selection: if we have, say, 50 features, we select a subset of them; the actual values of the selected features do not change.
2. Feature extraction: the actual feature values do change (new features are derived from the originals).
• Example samples (e.g., weight and height): Sample 1 = [59, 5.6], Sample 2 = [68, 5.9]; the data matrix is X = [Sample1, Sample2, …, and so on].
24. Example feature table (one row per image, one column per feature):

Feature 1 (f1) | Feature 2 (f2) | Feature 3 (f3) | … | Feature n (f784) | Labels
27             | 143            | 54             | … | 108              | Car
10             | 59             | 20             | … | 30               | Car
…              | …              | …              | … | …                | House

• Each image is 28 × 28 pixels, so 28*28 = 784 features per image.
1. Read your data from the database.
2. Store it in the form of a matrix, where each row is one image, e.g.
Img1: [27, 143, 2*27, …, 3*27]
Img2: [10, 59, 2*10, …, 3*10]
• Such data is often redundant and correlated: with the feature set F = {f1, f2, …, f784}, if f1, f3 and f784 are correlated, the correlated features can be discarded, e.g., with SFS/SBS or SFFS (see the sketch below).
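A minimal sketch of this step (numpy assumed; the image values are random placeholders): flatten each 28 × 28 image into a 784-dimensional row of the data matrix and check whether two features are correlated:

```python
import numpy as np

# Hypothetical stack of 100 grayscale images of size 28 x 28 (random values for illustration)
images = np.random.randint(0, 256, size=(100, 28, 28))

# Each row of X is one image flattened into 28*28 = 784 features
X = images.reshape(len(images), -1)
print(X.shape)                       # (100, 784)

# Correlation between two features, e.g. f1 and f3 (columns 0 and 2)
corr = np.corrcoef(X[:, 0], X[:, 2])[0, 1]
print(corr)

# A highly correlated feature carries redundant information and could be discarded;
# systematic procedures such as SFS/SBS/SFFS search for a good feature subset.
```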
25. Feature selection vs. feature extraction
1. Feature selection: the values of the chosen features do not change. E.g., if you have chosen f1 and f3 for further processing, then
Img1: {27, 54}
Img2: {10, 20}
2. Feature extraction: the feature values will be different (what will the different values be?). Typical methods are PCA, SVD, etc. (see the sketch below).
Benefits of reducing the number of features:
1. Less computation time.
2. Higher accuracy (this is not guaranteed in all cases; the Hughes phenomenon describes accuracy dropping as dimensionality grows for a fixed training-set size).
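A minimal sketch contrasting the two operations (numpy and scikit-learn assumed; toy values): selection keeps the original column values unchanged, while PCA-based extraction produces new values:

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy data matrix: 4 samples x 4 features (illustrative values)
X = np.array([[27., 143., 54., 108.],
              [10.,  59., 20.,  30.],
              [30., 150., 60., 120.],
              [12.,  65., 22.,  35.]])

# Feature selection: keep columns f1 and f3 -> original values are unchanged
X_selected = X[:, [0, 2]]
print(X_selected[0])                 # [27. 54.]

# Feature extraction: PCA projects onto new axes -> the values change
X_extracted = PCA(n_components=2).fit_transform(X)
print(X_extracted[0])                # new, transformed feature values
```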
26. • Data Collection
• How do we know when we have collected an adequately large and representative set of examples for training and testing the system?
• Working area: Greater Noida (population, suppose, 1 million).
• Objective: find the buyers of a particular product (e.g., a college bag).
• In many cases we can't collect data on the whole population, so we collect samples from it (different sampling methods are available for this). The samples should represent the actual population.
• Statistical analysis of the collected data (see the sketch below):
1. Univariate (mean, median, etc.)
2. Multivariate (scatter plots, etc.)
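A minimal sketch of this workflow (numpy assumed; the population values are synthetic stand-ins): draw a random sample from a large population and summarize it with univariate and multivariate statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 1,000,000 people described by two variables
# (say, age and monthly spend); the values are made up for illustration.
population = rng.normal(loc=[30.0, 2000.0], scale=[8.0, 600.0], size=(1_000_000, 2))

# Simple random sampling: we cannot survey everyone, so draw 1,000 samples
idx = rng.choice(len(population), size=1000, replace=False)
sample = population[idx]

# Univariate statistics of the sample vs. the population
print(np.mean(sample, axis=0), np.median(sample, axis=0))
print(np.mean(population, axis=0))     # the sample means should be close to these

# Multivariate view: correlation between the two variables in the sample
print(np.corrcoef(sample[:, 0], sample[:, 1]))
```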
27. • Feature Choice
• Depends on the characteristics of the problem domain. Features should be simple to extract, invariant to irrelevant transformations, and insensitive to noise.
28. • Model Choice
• We may be unsatisfied with the performance of our fish classifier and want to jump to another class of model.
• There are hundreds of available choices (e.g., ANN, SVM, decision trees); the selection should be based on the characteristics of your dataset.
29. • Training
• Use the data to determine the classifier. There are many different procedures for training classifiers and choosing models.
30. • Evaluation
• Measure the error rate (or performance) and switch from one set of features to another if needed.
• Confusion-matrix-based measures are commonly used (see the sketch below).
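A minimal sketch of confusion-matrix-based evaluation (scikit-learn assumed; the labels below are invented for illustration):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Hypothetical true labels and classifier predictions on a test set
y_true = np.array(["salmon", "salmon", "sea bass", "sea bass", "salmon", "sea bass"])
y_pred = np.array(["salmon", "sea bass", "sea bass", "sea bass", "salmon", "salmon"])

# Rows = true class, columns = predicted class
cm = confusion_matrix(y_true, y_pred, labels=["salmon", "sea bass"])
print(cm)

# Overall error rate = 1 - accuracy; per-class measures can also be read off cm
print("error rate:", 1.0 - accuracy_score(y_true, y_pred))
```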
31. • Computational Complexity
• What is the trade-off between computational ease and performance?
• How does an algorithm scale as a function of the number of features, patterns, or categories?
• The number of computational steps can easily run into the millions.
33. n-dimensional Vector
• An n-dimensional vector v is denoted as follows: v = [x1, x2, …, xn]^T (a column of n numbers).
• Its transpose v^T is the row vector v^T = [x1, x2, …, xn].
34. Inner (or dot) product
• Given v^T = (x1, x2, …, xn) and w^T = (y1, y2, …, yn), their dot product is defined as
v · w = v^T w = x1*y1 + x2*y2 + … + xn*yn,
which is a scalar.
35. Orthogonal / Orthonormal vectors
• A set of vectors x1, x2, …, xn is orthogonal if xi · xj = 0 for all i ≠ j.
• A set of vectors x1, x2, …, xn is orthonormal if, in addition, every vector has unit length, i.e., xi · xj = 1 when i = j and 0 otherwise.
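A small numpy check of these definitions (the standard basis of R3 is used as the example of an orthonormal set):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# Inner (dot) product: x1*y1 + x2*y2 + ... + xn*yn, a scalar
print(np.dot(v, w))                                # 32.0

# The standard basis vectors of R^3 form an orthonormal set
i, j, k = np.eye(3)
print(np.dot(i, j), np.dot(i, k), np.dot(j, k))    # all 0 -> mutually orthogonal
print(np.dot(i, i), np.dot(j, j), np.dot(k, k))    # all 1 -> unit length
```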
36. Linear combinations
• A vector v is a linear combination of the vectors v1, …, vk if:
v = c1 v1 + c2 v2 + … + ck vk,
where c1, …, ck are constants.
• Example: every vector in R3 can be expressed as a linear combination of the unit vectors i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1).
37. Space spanning
• A set of vectors S = (v1, v2, …, vk) spans a space W if every vector w in W can be written as a linear combination of the vectors in S:
w = c1 v1 + c2 v2 + … + ck vk.
• The unit vectors i, j, and k span R3.
38. Linear dependence
• A set of vectors v1, …, vk is linearly dependent if at least one of them is a linear combination of the others:
vj = c1 v1 + … + c(j-1) v(j-1) + c(j+1) v(j+1) + … + ck vk
(i.e., vj does not appear on the right side).
39. Linear independence
• A set of vectors v1, …, vk is linearly independent if no vector vj can be represented as a linear combination of the remaining vectors, i.e., the only solution of
c1 v1 + c2 v2 + … + ck vk = 0
is c1 = c2 = … = ck = 0.
• Example: for two linearly independent vectors v1 and v2, c1 v1 + c2 v2 = 0 forces c1 = c2 = 0.
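A quick numerical check (example vectors chosen for illustration): the vectors are linearly independent exactly when the matrix that stacks them as columns has full column rank:

```python
import numpy as np

# Stack each set of vectors as the columns of a matrix
independent = np.column_stack([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
dependent = np.column_stack([[1, 2, 3], [2, 4, 6], [0, 1, 0]])   # 2nd column = 2 * 1st column

# Full column rank (= number of vectors) <=> linear independence
print(np.linalg.matrix_rank(independent))   # 3 -> independent
print(np.linalg.matrix_rank(dependent))     # 2 -> dependent
```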
40. Vector basis
• A set of vectors v1, …, vk forms a basis in some vector space W if:
(1) (v1, …, vk) span W
(2) (v1, …, vk) are linearly independent
• Standard bases: R2: (1, 0), (0, 1); R3: (1, 0, 0), (0, 1, 0), (0, 0, 1); Rn: e1 = (1, 0, …, 0), …, en = (0, …, 0, 1).
41. Matrix Operations
• Matrix addition/subtraction
• Add/Subtract corresponding elements.
• Matrices must be of same size.
• Matrix multiplication
• An m × n matrix can multiply a q × p matrix only if the inner dimensions match (condition: n = q); the result is an m × p matrix.
45. Determinants
• 2 × 2: det [[a, b], [c, d]] = a*d − b*c.
• 3 × 3: computed by cofactor expansion, e.g., expanded along the 1st column.
• n × n: computed recursively by cofactor expansion along the 1st column or, more generally, along any kth column (or row).
• Property: the value of the determinant is the same whichever column (or row) is expanded along.
46. Matrix Inverse
• The inverse of a matrix A, denoted A^-1, has the property:
A A^-1 = A^-1 A = I
• A^-1 exists only if det(A) ≠ 0.
• Definitions
• Singular matrix: A^-1 does not exist.
• Ill-conditioned matrix: A is "close" to being singular.
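A small numpy sketch of these notions (the matrices are chosen for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

print(np.linalg.det(A))              # non-zero -> A is invertible
A_inv = np.linalg.inv(A)
print(A @ A_inv)                     # ~ identity matrix I

singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])    # the second row is 2x the first -> det = 0
print(np.linalg.det(singular))       # 0 -> no inverse exists (singular matrix)

# An ill-conditioned matrix is "close" to singular: its condition number is huge
ill = np.array([[1.0, 1.0],
                [1.0, 1.0 + 1e-12]])
print(np.linalg.cond(ill))
```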
49. Rank of matrix
• Defined as the dimension of the largest square sub-matrix of A that has a non-zero determinant.
• Example: a matrix whose largest square sub-matrix with non-zero determinant is 3 × 3 has rank 3.
50. Rank of matrix (cont'd)
• Alternatively, it is defined as the maximum number of linearly independent columns (or rows) of A.
• Example: a 4 × 4 matrix with linearly dependent columns has rank less than 4 (i.e., its rank is not 4).
52. Eigenvalues and Eigenvectors
• The vector v is an eigenvector of matrix A and λ is an eigenvalue of A if:
A v = λ v (assuming v is non-zero).
• Geometric interpretation: the linear transformation implied by A cannot change the direction of the eigenvectors v, only their magnitude.
53. Computing λ and v
• To compute the eigenvalues λ of a matrix A, find the roots of the characteristic polynomial:
det(A − λI) = 0.
• The eigenvectors can then be computed by solving (A − λI) v = 0 for each eigenvalue λ (example below).
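A concrete check with a small matrix (numpy assumed; the 2 × 2 matrix is chosen for illustration):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Roots of the characteristic polynomial det(A - lambda*I) = 0.
# For a 2x2 matrix this is lambda^2 - trace(A)*lambda + det(A) = 0.
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
print(np.roots(coeffs))              # eigenvalues, here 5 and 2

# numpy can compute eigenvalues and eigenvectors directly
vals, vecs = np.linalg.eig(A)
print(vals)
print(vecs)                          # columns are the eigenvectors

# Check the defining property A v = lambda v for the first eigenpair
v = vecs[:, 0]
print(A @ v, vals[0] * v)
```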
54. Properties of λ and v
• Eigenvalues and eigenvectors are only defined for square matrices.
• Eigenvectors are not unique (e.g., if v is an eigenvector, so is kv).
• Suppose λ1, λ2, …, λn are the eigenvalues of A; then det(A) = λ1·λ2·…·λn and tr(A) = λ1 + λ2 + … + λn.
55. Matrix diagonalization
• Given an n × n matrix A, find P such that:
P^-1 A P = Λ, where Λ is diagonal.
• Solution: set P = [v1 v2 … vn], where v1, v2, …, vn are the eigenvectors of A; then Λ = diag(λ1, λ2, …, λn), the corresponding eigenvalues.
57. Matrix diagonalization (cont'd)
• If A is diagonalizable, then the corresponding eigenvectors v1, v2, …, vn form a basis in Rn.
• If A is also symmetric, its eigenvalues are real and the corresponding eigenvectors are orthogonal.
58. • An n x n matrix A is diagonalizable iff rank(P)=n, that
is, it has n linearly independent eigenvectors.
• Theorem: If the eigenvalues of A are all distinct, then
the corresponding eigenvectors are linearly
independent (i.e., A is diagonalizable).
Are all n x n matrices diagonalizable?
60. Matrix decomposition (cont'd)
• Matrix decomposition is simplified in the case of symmetric matrices (i.e., orthogonal eigenvectors): then P^-1 = P^T, so
A = P D P^T = λ1 v1 v1^T + λ2 v2 v2^T + … + λn vn vn^T.
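A numpy sketch of the symmetric case (matrix chosen for illustration):

```python
import numpy as np

# A symmetric matrix: its eigenvalues are real and its eigenvectors are orthogonal
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, P = np.linalg.eigh(A)          # eigh is specialized for symmetric matrices
D = np.diag(vals)

# For symmetric A the eigenvector matrix P is orthogonal: P^-1 = P^T
print(P.T @ P)                       # ~ identity matrix
print(P @ D @ P.T)                   # reconstructs A, i.e. A = P D P^T
```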
65. Main Phases
• Training phase and test phase.
• The output is either classification (thematic values) or regression (continuous values).
• Example: with 50,000 images you might divide 35k for training and 15k for testing; part of the training data can additionally be held out as validation data (see the sketch below).
• Both phases run through the same steps (Step 1, Step 2, Step 3); the test phase adds an additional step that produces a score from some distance function (e.g., a score of 0.75 for a sample).
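A minimal sketch of such a split (numpy assumed; the data is random and the 5k validation size is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: 50,000 feature vectors (10 features each) with binary labels
X = rng.normal(size=(50_000, 10))
y = rng.integers(0, 2, size=50_000)

# Shuffle, then split: 35k for training, 15k for testing;
# carve 5k validation samples out of the training portion
idx = rng.permutation(len(X))
train_idx, test_idx = idx[:35_000], idx[35_000:]
val_idx, train_idx = train_idx[:5_000], train_idx[5_000:]

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]
X_test, y_test = X[test_idx], y[test_idx]
print(X_train.shape, X_val.shape, X_test.shape)   # (30000, 10) (5000, 10) (15000, 10)
```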
66. Complexity of PR – An Example
• Problem: sorting incoming fish on a conveyor belt.
• Assumption: two kinds of fish:
(1) sea bass
(2) salmon
[Figure: a camera mounted above the conveyor belt]
67. Sensors
• Sensing:
• Use a sensor (camera or microphone) for data capture.
• PR performance depends on the bandwidth, resolution, sensitivity, and distortion of the sensor.
69. Training/Test data
• How do we know that we have collected an
adequately large and representative set of
examples for training/testing the system?
Training Set ?
Test Set ?
70. Feature Extraction
• How to choose a good set of features?
• Discriminative features
• Invariant features (e.g., invariant to geometric
transformations such as translation, rotation and scale)
• Are there ways to automatically learn which features
are best ?
71. Feature Extraction - Example
• Let’s consider the fish
classification example:
• Assume a fisherman told us that
a sea bass is generally longer
than a salmon.
• We can use length as a feature
and decide between sea bass
and salmon according to a
threshold on length.
• How should we choose the
threshold?
72. Feature Extraction - Length
• Even though sea bass are longer than salmon on average, there are many examples of fish for which this observation does not hold.
[Figure: histograms of "length" for the two classes, with a threshold l*]
73. Feature Extraction - Lightness
• Consider different features, e.g., “lightness”
• It seems easier to choose the threshold x* but we still
cannot make a perfect decision.
[Figure: histograms of "lightness" for the two classes, with a threshold x*]
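A minimal sketch of choosing such a threshold (numpy; the "lightness" values below are invented for illustration): try candidate thresholds and keep the one with the fewest training errors — some errors remain even at the best x*:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "lightness" measurements for the two classes
salmon = rng.normal(loc=3.0, scale=0.8, size=200)
sea_bass = rng.normal(loc=5.0, scale=0.8, size=200)

# Decision rule: lightness < x*  -> salmon, otherwise sea bass.
# Count the misclassifications for each candidate threshold.
candidates = np.linspace(2.0, 6.0, 200)
errors = [np.sum(salmon >= t) + np.sum(sea_bass < t) for t in candidates]

x_star = candidates[int(np.argmin(errors))]
print(x_star, min(errors))           # the best threshold still leaves some errors
```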
74. Multiple Features
• To improve recognition accuracy, we might need to use more than one feature.
• Single features might not yield the best performance.
• Using combinations of features might yield better performance, e.g., the feature vector
x = [x1, x2]^T, where x1 = lightness and x2 = width.
75. How Many Features?
• Does adding more features always improve
performance?
• It might be difficult and computationally expensive to
extract certain features.
• Correlated features might not improve performance (i.e.
redundancy).
• “Curse” of dimensionality.
76. Curse of Dimensionality
• Adding too many features can, paradoxically, lead to a
worsening of performance.
• Divide each of the input features into a number of intervals, so that the value of a feature can be specified approximately by saying in which interval it lies.
• If each input feature is divided into M intervals, then the total number of cells is M^d (d: number of features); e.g., M = 10 intervals and d = 5 features already give 10^5 = 100,000 cells.
• Since each cell must contain at least one point, the amount of training data needed grows exponentially with d.
77. Missing Features
• Certain features might be missing (e.g., due to
occlusion).
• How should we train the classifier with missing
features ?
• How should the classifier make the best decision
with missing features ?
78. Classification
• Partition the feature space into two regions by finding
the decision boundary that minimizes the error.
• How should we find the optimal decision boundary?
79. Complexity of Classification Model
• We can get perfect classification performance on the
training data by choosing a more complex model.
• Complex models are tuned to the particular training
samples, rather than on the characteristics of the true
model.
• How well can the model generalize to unknown samples? A model tuned too closely to the training samples is said to be overfitting.
80. Generalization
• Generalization is defined as the ability of a classifier to
produce correct results on novel patterns.
• How can we improve generalization performance ?
• More training examples (i.e., better model estimates).
• Simpler models usually yield better performance.
[Figure: a complex model vs. a simpler model]
81. Understanding model complexity:
function approximation
• Approximate a function from a set of samples:
o the green curve is the true function,
o ten sample points, corrupted by noise, are shown as blue circles.
82. Understanding model complexity:
function approximation (cont’d)
Polynomial curve fitting: polynomials having various
orders, shown as red curves, fitted to the set of 10
sample points.
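A short numpy sketch of this kind of experiment (a sine curve plus noise stands in for the unknown true function):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy samples of an underlying "true" function (a sine curve, for illustration)
x = np.linspace(0.0, 1.0, 10)
t = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.shape)

# Fit polynomials of increasing order to the same 10 points
for order in (1, 3, 9):
    coeffs = np.polyfit(x, t, deg=order)
    fit = np.polyval(coeffs, x)
    train_error = np.sqrt(np.mean((fit - t) ** 2))
    # The training error shrinks as the order grows, but the 9th-order curve
    # oscillates wildly between the samples (overfitting).
    print(order, round(train_error, 4))
```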
83. Understanding model complexity:
function approximation (cont’d)
• More data can improve model estimation.
• Polynomial curve fitting: 9th-order polynomials fitted to 15 and to 100 sample points.
84. Improve Classification Performance through Post-
processing
• Consider the problem of character recognition.
• Exploit context to improve performance.
How m ch info mation are y u
mi sing?
85. Improve Classification Performance through
Ensembles of Classifiers
• Performance can be
improved using a "pool" of
classifiers.
• How should we build and
combine different
classifiers ?
86. Cost of misclassifications
• Consider the fish classification example.
• There are two possible classification errors:
(1) Deciding the fish was a sea bass when it was a
salmon.
(2) Deciding the fish was a salmon when it was a sea
bass.
• Are both errors equally important ?
87. Cost of misclassifications (cont'd)
• Suppose that:
• Customers who buy salmon will object vigorously if
they see sea bass in their cans.
• Customers who buy sea bass will not be unhappy if
they occasionally see some expensive salmon in their
cans.
• How does this knowledge affect our decision?
88. Computational Complexity
• How does an algorithm scale with the number of:
• features
• patterns
• categories
• Need to consider tradeoffs between computational
complexity and performance.
89. Would it be possible to build a “general purpose”
PR system?
• It would be very difficult to design a system that is
capable of performing a variety of classification
tasks.
• Different problems require different features.
• Different features might yield different solutions.
• Different tradeoffs exist for different problems.