Data Mining Lecture_10(b).pptx

DATA MINING
LECTURE 10B
Classification
k-nearest neighbor classifier
Naïve Bayes
Logistic Regression
Support Vector Machines
Subrata Kumer Paul
Assistant Professor, Dept. of CSE, BAUET
sksubrata96@gmail.com
NEAREST NEIGHBOR
CLASSIFICATION
Illustrating Classification Task
[Diagram: a learning algorithm performs induction on the Training Set to learn a model; the model is then applied by deduction to label the Test Set.]
Training Set:
Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes
Test Set:
Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?
Instance-Based Classifiers
[Diagram: a set of stored cases, each with attributes Atr1 … AtrN and a class label (A, B, or C), and an unseen case with attributes Atr1 … AtrN to be labeled.]
• Store the training records
• Use training records to predict the class label of unseen cases
Instance-Based Classifiers
• Examples:
• Rote-learner
• Memorizes entire training data and performs classification only if
attributes of record match one of the training examples exactly
• Nearest neighbor classifier
• Uses k “closest” points (nearest neighbors) for performing
classification
Nearest Neighbor Classifiers
• Basic idea:
• “If it walks like a duck, quacks like a duck, then it’s probably a duck”
[Diagram: compute the distance from the test record to all training records, then choose k of the “nearest” records.]
Nearest-Neighbor Classifiers
 Requires three things
– The set of stored records
– Distance metric to compute distance between records
– The value of k, the number of nearest neighbors to retrieve
 To classify an unknown record:
1. Compute distance to other training records
2. Identify k nearest neighbors
3. Use class labels of nearest neighbors to determine the class label of the unknown record (e.g., by taking a majority vote)
Definition of Nearest Neighbor
[Figure: (a) 1-nearest neighbor, (b) 2-nearest neighbor, (c) 3-nearest neighbor of a record x.]
The k-nearest neighbors of a record x are the data points that have the k smallest distances to x.
1-Nearest Neighbor
The Voronoi diagram defines the classification boundary: each cell takes the class of the training point it contains (the green point in the figure).
Nearest Neighbor Classification
• Compute distance between two points:
• Euclidean distance: 𝑑(𝑝, 𝑞) = √(Σ𝑖 (𝑝𝑖 − 𝑞𝑖)²)
• Determine the class from the nearest neighbor list
• take the majority vote of class labels among the k-nearest neighbors
• Weigh the vote according to distance
• weight factor, w = 1/d²
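To make the procedure concrete, here is a minimal sketch (function and variable names are illustrative, not from the lecture) of the Euclidean distance computation and the majority vote, optionally weighted by w = 1/d²:

```python
# Minimal k-NN sketch: Euclidean distance plus a (optionally
# distance-weighted) majority vote over the k nearest records.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3, weighted=False):
    # Euclidean distance d(p, q) = sqrt(sum_i (p_i - q_i)^2)
    dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))
    nn = np.argsort(dists)[:k]          # indices of the k nearest records
    if not weighted:
        return Counter(y_train[nn]).most_common(1)[0][0]
    votes = Counter()
    for i in nn:
        votes[y_train[i]] += 1.0 / (dists[i] ** 2 + 1e-12)  # weight w = 1/d^2
    return votes.most_common(1)[0][0]

X = np.array([[1.0, 1.0], [1.2, 0.8], [6.0, 6.0], [5.8, 6.2]])
y = np.array(['A', 'A', 'B', 'B'])
print(knn_predict(X, y, np.array([1.1, 0.9]), k=3))  # -> 'A'
```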
Nearest Neighbor Classification…
• Choosing the value of k:
• If k is too small, sensitive to noise points
• If k is too large, neighborhood may include points from
other classes
[Figure: a test record marked X, with neighborhoods of increasing k.]
Nearest Neighbor Classification…
• Scaling issues
• Attributes may have to be scaled to prevent distance
measures from being dominated by one of the attributes
• Example:
• height of a person may vary from 1.5m to 1.8m
• weight of a person may vary from 90lb to 300lb
• income of a person may vary from $10K to $1M
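As a hedged illustration of the scaling fix, the attributes can be rescaled to a common range, e.g. [0, 1], before distances are computed (the values below are made up):

```python
# Min-max scaling so that income (~$10K-$1M) does not dominate
# height (1.5m-1.8m) and weight (90lb-300lb) in the distance.
import numpy as np

X = np.array([[1.5,  90.0,   10_000.0],
              [1.8, 300.0, 1_000_000.0],
              [1.7, 150.0,   50_000.0]])  # columns: height, weight, income

X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(X_scaled)  # every attribute now lies in [0, 1]
```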
Nearest Neighbor Classification…
• Problem with Euclidean measure:
• High dimensional data
• curse of dimensionality
• Can produce counter-intuitive results
(1 1 1 1 1 1 1 1 1 1 1 0) vs (0 1 1 1 1 1 1 1 1 1 1 1): d = 1.4142
(1 0 0 0 0 0 0 0 0 0 0 0) vs (0 0 0 0 0 0 0 0 0 0 0 1): d = 1.4142
 Solution: Normalize the vectors to unit length
Nearest neighbor Classification…
• k-NN classifiers are lazy learners
• They do not build models explicitly
• Unlike eager learners such as decision trees
• Classifying unknown records is relatively expensive
• Naïve algorithm: O(n)
• Need for structures to retrieve nearest neighbors fast.
• The Nearest Neighbor Search problem.
Nearest Neighbor Search
• Two-dimensional kd-trees
• A data structure for answering nearest neighbor queries
in R2
• kd-tree construction algorithm
• Select the x or y dimension (alternating between the
two)
• Partition the space into two with a line passing from the
median point
• Repeat recursively in the two partitions as long as there
are enough points
2-dimensional kd-trees
[Sequence of figures: the plane is split recursively, alternating between the x and y dimensions, with a line through the median point at each step.]
region(u) – all the black points in the subtree of u
 A binary tree:
 Size O(n)
 Depth O(log n)
 Construction time O(n log n)
 Query time: worst case O(n), but for many cases O(log n)
 Generalizes to d dimensions
 Example of Binary Space Partitioning
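In practice one rarely writes a kd-tree from scratch; the sketch below uses SciPy's cKDTree (a library assumption, not part of the lecture) to build the tree and answer a k-nearest-neighbor query:

```python
# kd-tree nearest neighbor search with SciPy's cKDTree; any kd-tree
# implementation with build and query operations would do.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((1000, 2))            # 1000 points in R^2

tree = cKDTree(points)                    # O(n log n) construction
dist, idx = tree.query([0.5, 0.5], k=3)   # k nearest neighbors of the query
print(idx, dist)                          # indices and distances of the 3 NNs
```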
SUPPORT VECTOR
MACHINES
Support Vector Machines
• Find a linear hyperplane (decision boundary) that will separate the data
Support Vector Machines
• One Possible Solution
B1
Support Vector Machines
• Another possible solution
B2
Support Vector Machines
• Other possible solutions
B2
Support Vector Machines
• Which one is better? B1 or B2?
• How do you define better?
B1
B2
Support Vector Machines
• Find the hyperplane that maximizes the margin => B1 is better than B2
[Figure: boundaries B1 and B2 with margin hyperplanes b11, b12 and b21, b22; B1 has the larger margin.]
Support Vector Machines
[Figure: decision boundary B1 with the two margin hyperplanes b11 and b12.]
Decision boundary: 𝑤 ∙ 𝑥 + 𝑏 = 0
Margin hyperplanes: 𝑤 ∙ 𝑥 + 𝑏 = 1 and 𝑤 ∙ 𝑥 + 𝑏 = −1
𝑓(𝑥) = 1 if 𝑤 ∙ 𝑥 + 𝑏 ≥ 1; 𝑓(𝑥) = −1 if 𝑤 ∙ 𝑥 + 𝑏 ≤ −1
Margin = 2 / ‖𝑤‖
Support Vector Machines
• We want to maximize: Margin = 2 / ‖𝑤‖
• Which is equivalent to minimizing: 𝐿(𝑤) = ‖𝑤‖² / 2
• But subject to the following constraints:
𝑤 ∙ 𝑥𝑖 + 𝑏 ≥ 1 if 𝑦𝑖 = 1
𝑤 ∙ 𝑥𝑖 + 𝑏 ≤ −1 if 𝑦𝑖 = −1
• This is a constrained optimization problem
• Numerical approaches to solve it (e.g., quadratic programming)
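As an illustrative sketch (not part of the lecture), such a solver can be used off the shelf; with scikit-learn, a very large slack penalty C approximates the hard-margin problem, and w, b, and the margin 2/‖w‖ can be read off the fitted model:

```python
# Approximate hard-margin linear SVM: huge C ~ no slack allowed.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 1], [2, 2], [4, 4], [5, 5]], dtype=float)
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel='linear', C=1e10).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
print('w =', w, 'b =', b)
print('margin =', 2 / np.linalg.norm(w))   # width of the margin band
```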
Support Vector Machines
• What if the problem is not linearly separable?
[Figure: a point on the wrong side of the margin, with slack 𝜉𝑖 measured along the direction of 𝑤.]
Support Vector Machines
• What if the problem is not linearly separable?
• Introduce slack variables 𝜉𝑖
• Need to minimize: 𝐿(𝑤) = ‖𝑤‖² / 2 + 𝐶 Σ𝑖=1..𝑁 𝜉𝑖ᵏ
• Subject to:
𝑤 ∙ 𝑥𝑖 + 𝑏 ≥ 1 − 𝜉𝑖 if 𝑦𝑖 = 1
𝑤 ∙ 𝑥𝑖 + 𝑏 ≤ −1 + 𝜉𝑖 if 𝑦𝑖 = −1
Nonlinear Support Vector Machines
• What if decision boundary is not linear?
Nonlinear Support Vector Machines
• Transform data into higher dimensional space
Use the Kernel Trick
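A small sketch of the kernel trick in practice (dataset and library choices are assumptions): a linear SVM fails on concentric circles, while an RBF-kernel SVM separates them without ever computing the higher-dimensional mapping explicitly:

```python
# Linear vs RBF-kernel SVM on data that is not linearly separable.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

print(SVC(kernel='linear').fit(X, y).score(X, y))  # ~0.5: no linear separator
print(SVC(kernel='rbf').fit(X, y).score(X, y))     # ~1.0 with the RBF kernel
```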
LOGISTIC REGRESSION
Classification via regression
• Instead of predicting the class of a record we want to predict the probability of the class given the record
• The problem of predicting continuous values is called a regression problem
• General approach: find a continuous function that models the continuous points.
Example: Linear regression
• Given a dataset of the form (𝑥1, 𝑦1), … , (𝑥𝑛, 𝑦𝑛), find a linear function that, given the vector 𝑥𝑖, predicts the 𝑦𝑖 value as 𝑦𝑖′ = 𝑤𝑇𝑥𝑖
• Find a vector of weights 𝑤 that minimizes the sum of squared errors Σ𝑖 (𝑦𝑖′ − 𝑦𝑖)²
• Several techniques for solving the problem.
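One such technique is ordinary least squares; a minimal numpy sketch (the data values are illustrative):

```python
# Least squares: find w minimizing sum_i (w^T x_i - y_i)^2.
import numpy as np

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # first column = intercept
y = np.array([2.1, 3.9, 6.2])

w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)       # fitted weights
print(X @ w)   # predictions y_i' = w^T x_i
```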
Classification via regression
• Assume a linear classification boundary
𝑤 ⋅ 𝑥 = 0
𝑤 ⋅ 𝑥 > 0
𝑤 ⋅ 𝑥 < 0
For the positive class the bigger
the value of 𝑤 ⋅ 𝑥, the further the
point is from the classification
boundary, the higher our certainty
for the membership to the positive
class
• Define 𝑃(𝐶+|𝑥) as an increasing
function of 𝑤 ⋅ 𝑥
For the negative class the smaller
the value of 𝑤 ⋅ 𝑥, the further the
point is from the classification
boundary, the higher our certainty
for the membership to the negative
class
• Define 𝑃(𝐶−|𝑥) as a decreasing
function of 𝑤 ⋅ 𝑥
Logistic Regression
The logistic function: 𝑓(𝑡) = 1 / (1 + 𝑒^−𝑡)
𝑃(𝐶+|𝑥) = 1 / (1 + 𝑒^−𝑤⋅𝑥)
𝑃(𝐶−|𝑥) = 𝑒^−𝑤⋅𝑥 / (1 + 𝑒^−𝑤⋅𝑥)
Linear regression on the log-odds ratio: log(𝑃(𝐶+|𝑥) / 𝑃(𝐶−|𝑥)) = 𝑤 ⋅ 𝑥
Logistic Regression: find the vector 𝑤 that maximizes the probability of the observed data
Logistic Regression
• Produces a probability estimate for the class
membership which is often very useful.
• The weights can be useful for understanding the
feature importance.
• Works for relatively large datasets
• Fast to apply.
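A hedged usage sketch with scikit-learn (an assumed library choice), showing the probability estimates 𝑃(𝐶+|𝑥) and the learned weights:

```python
# Fit a logistic regression and read off P(C|x) and the weights w.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[2.0]]))   # [P(C-|x), P(C+|x)]
print(clf.coef_, clf.intercept_)    # weights w and bias
```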
NAÏVE BAYES CLASSIFIER
Bayes Classifier
• A probabilistic framework for solving classification
problems
• A, C random variables
• Joint probability: Pr(A=a,C=c)
• Conditional probability: Pr(C=c | A=a)
• Relationship between joint and conditional
probability distributions
• Bayes Theorem:
𝑃(𝐶|𝐴) = 𝑃(𝐴|𝐶) 𝑃(𝐶) / 𝑃(𝐴)
which follows from Pr(𝐴, 𝐶) = Pr(𝐶|𝐴) Pr(𝐴) = Pr(𝐴|𝐶) Pr(𝐶)
Bayesian Classifiers
• Consider each attribute and class label as random
variables
Tid  Refund  Marital Status  Taxable Income  Evade
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes
(Refund: categorical, Marital Status: categorical, Taxable Income: continuous, Evade: class)
Evade C: event space {Yes, No}, P(C) = (0.3, 0.7)
Refund A1: event space {Yes, No}, P(A1) = (0.3, 0.7)
Marital Status A2: event space {Single, Married, Divorced}, P(A2) = (0.4, 0.4, 0.2)
Taxable Income A3: event space ℝ, P(A3) ~ Normal(𝜇, 𝜎)
Bayesian Classifiers
• Given a record X over attributes (A1, A2,…,An)
• E.g., X = (‘Yes’, ‘Single’, 125K)
• The goal is to predict class C
• Specifically, we want to find the value c of C that maximizes
P(C=c| X)
• Maximum A Posteriori (MAP) probability estimate
• Can we estimate P(C| X) directly from data?
• This means that we estimate the probability for all possible
values of the class variable.
Bayesian Classifiers
• Approach:
• compute the posterior probability P(C | A1, A2, …, An) for all
values of C using the Bayes theorem
• Choose value of C that maximizes
P(C | A1, A2, …, An)
• Equivalent to choosing value of C that maximizes
P(A1, A2, …, An|C) P(C)
• How to estimate P(A1, A2, …, An | C )?
𝑃(𝐶|𝐴1𝐴2 … 𝐴𝑛) = 𝑃(𝐴1𝐴2 … 𝐴𝑛|𝐶) 𝑃(𝐶) / 𝑃(𝐴1𝐴2 … 𝐴𝑛)
Naïve Bayes Classifier
• Assume independence among attributes Ai when the class is given:
• 𝑃(𝐴1, 𝐴2, … , 𝐴𝑛|𝐶) = 𝑃(𝐴1|𝐶) 𝑃(𝐴2|𝐶) ⋯ 𝑃(𝐴𝑛|𝐶)
• We can estimate P(Ai|C) for all values of Ai and C.
• New point X is classified to class c if
𝑃(𝐶 = 𝑐|𝑋) ∝ 𝑃(𝐶 = 𝑐) Π𝑖 𝑃(𝐴𝑖|𝑐)
is maximum over all possible values of C.
How to Estimate Probabilities from Data?
• Class prior probability: 𝑃(𝐶 = 𝑐) = 𝑁𝑐 / 𝑁
e.g., P(C = No) = 7/10, P(C = Yes) = 3/10
• For discrete attributes: 𝑃(𝐴𝑖 = 𝑎|𝐶 = 𝑐) = 𝑁𝑎,𝑐 / 𝑁𝑐
where 𝑁𝑎,𝑐 is the number of instances having attribute 𝐴𝑖 = 𝑎 and belonging to class 𝑐
• Examples (from the training table above):
P(Status=Married|No) = 4/7
P(Refund=Yes|Yes) = 0
How to Estimate Probabilities from Data?
• For continuous attributes:
• Discretize the range into bins
• one ordinal attribute per bin
• violates independence assumption
• Two-way split: (A < v) or (A > v)
• choose only one of the two splits as new attribute
• Probability density estimation:
• Assume attribute follows a normal distribution
• Use data to estimate parameters of distribution
(i.e., mean 𝜇 and standard deviation 𝜎)
• Once probability distribution is known, we can use it to estimate
the conditional probability P(Ai|c)
How to Estimate Probabilities from Data?
• Normal distribution, one for each (𝑎𝑖, 𝑐𝑖) pair:
𝑃(𝐴𝑖 = 𝑎𝑖𝑗|𝑐𝑗) = (1 / √(2𝜋𝜎𝑖𝑗²)) 𝑒^(−(𝑎𝑖𝑗 − 𝜇𝑖𝑗)² / (2𝜎𝑖𝑗²))
• For (Income, Class=No):
• If Class=No
• sample mean = 110
• sample variance = 2975
(Training data table as above.)
𝑃(Income = 120|No) = (1 / (√(2𝜋) ⋅ 54.54)) 𝑒^(−(120 − 110)² / (2 ⋅ 2975)) = 0.0072
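The same number can be checked directly; a small sketch reproducing the slide's computation:

```python
# Normal density for P(Income=120 | No) with mean 110 and variance 2975.
import math

def gaussian(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

print(gaussian(120, 110, 2975))   # ~0.0072, as on the slide
```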
Example of Naïve Bayes Classifier
• Creating a Naïve Bayes Classifier essentially means computing counts:
Total number of records: N = 10
Class No:
Number of records: 7
Attribute Refund:
Yes: 3
No: 4
Attribute Marital Status:
Single: 2
Divorced: 1
Married: 4
Attribute Income:
mean: 110
variance: 2975
Class Yes:
Number of records: 3
Attribute Refund:
Yes: 0
No: 3
Attribute Marital Status:
Single: 2
Divorced: 1
Married: 0
Attribute Income:
mean: 90
variance: 25
Example of Naïve Bayes Classifier
Given a Test Record: X = (Refund = No, Married, Income = 120K)
naïve Bayes Classifier:
P(Refund=Yes|No) = 3/7
P(Refund=No|No) = 4/7
P(Refund=Yes|Yes) = 0
P(Refund=No|Yes) = 1
P(Marital Status=Single|No) = 2/7
P(Marital Status=Divorced|No) = 1/7
P(Marital Status=Married|No) = 4/7
P(Marital Status=Single|Yes) = 2/7
P(Marital Status=Divorced|Yes) = 1/7
P(Marital Status=Married|Yes) = 0
For taxable income:
If class=No: sample mean = 110, sample variance = 2975
If class=Yes: sample mean = 90, sample variance = 25
 P(X|Class=No) = P(Refund=No|Class=No) × P(Married|Class=No) × P(Income=120K|Class=No)
= 4/7 × 4/7 × 0.0072 = 0.0024
 P(X|Class=Yes) = P(Refund=No|Class=Yes) × P(Married|Class=Yes) × P(Income=120K|Class=Yes)
= 1 × 0 × 1.2×10⁻⁹ = 0
P(No) = 0.7, P(Yes) = 0.3
Since P(X|No)P(No) > P(X|Yes)P(Yes), we have P(No|X) > P(Yes|X)
=> Class = No
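A short sketch reproducing the comparison above in code (the gaussian helper is illustrative):

```python
# Compare P(X|No)P(No) and P(X|Yes)P(Yes) for
# X = (Refund=No, Married, Income=120K).
import math

def gaussian(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

p_no  = (4/7) * (4/7) * gaussian(120, 110, 2975) * 0.7   # P(X|No) P(No)
p_yes = 1.0   * 0.0   * gaussian(120,  90,   25) * 0.3   # P(X|Yes) P(Yes)
print(p_no, p_yes)    # p_no > p_yes  =>  Class = No
```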
Naïve Bayes Classifier
• If one of the conditional probabilities is zero, then the entire expression becomes zero
• Probability estimation:
Original: 𝑃(𝐴𝑖 = 𝑎|𝐶 = 𝑐) = 𝑁𝑎𝑐 / 𝑁𝑐
Laplace: 𝑃(𝐴𝑖 = 𝑎|𝐶 = 𝑐) = (𝑁𝑎𝑐 + 1) / (𝑁𝑐 + 𝑁𝑖)
m-estimate: 𝑃(𝐴𝑖 = 𝑎|𝐶 = 𝑐) = (𝑁𝑎𝑐 + 𝑚𝑝) / (𝑁𝑐 + 𝑚)
Ni: number of attribute values for attribute Ai
p: prior probability
m: parameter
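A minimal sketch of the three estimates (function names are illustrative), checked against the slide's Laplace-smoothed P(Refund=Yes|Yes):

```python
# Original, Laplace, and m-estimate probability estimates.
def original(n_ac, n_c):
    return n_ac / n_c

def laplace(n_ac, n_c, n_i):
    return (n_ac + 1) / (n_c + n_i)

def m_estimate(n_ac, n_c, m, p):
    return (n_ac + m * p) / (n_c + m)

# P(Refund=Yes | Yes): 0 of 3 records; Refund has N_i = 2 values.
print(original(0, 3), laplace(0, 3, 2), m_estimate(0, 3, m=2, p=0.5))
# -> 0.0, 0.2 (= 1/5, as on the next slide), 0.2
```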
Example of Naïve Bayes Classifier (With Laplace Smoothing)
Given a Test Record: X = (Refund = No, Married, Income = 120K)
naïve Bayes Classifier:
P(Refund=Yes|No) = 4/9
P(Refund=No|No) = 5/9
P(Refund=Yes|Yes) = 1/5
P(Refund=No|Yes) = 4/5
P(Marital Status=Single|No) = 3/10
P(Marital Status=Divorced|No) = 2/10
P(Marital Status=Married|No) = 5/10
P(Marital Status=Single|Yes) = 3/6
P(Marital Status=Divorced|Yes) = 2/6
P(Marital Status=Married|Yes) = 1/6
For taxable income:
If class=No: sample mean = 110, sample variance = 2975
If class=Yes: sample mean = 90, sample variance = 25
 P(X|Class=No) = P(Refund=No|Class=No) × P(Married|Class=No) × P(Income=120K|Class=No)
= 5/9 × 5/10 × 0.0072
 P(X|Class=Yes) = P(Refund=No|Class=Yes) × P(Married|Class=Yes) × P(Income=120K|Class=Yes)
= 4/5 × 1/6 × 1.2×10⁻⁹
P(No) = 0.7, P(Yes) = 0.3
Since P(X|No)P(No) > P(X|Yes)P(Yes), we have P(No|X) > P(Yes|X)
=> Class = No
Implementation details
• Computing the conditional probabilities involves multiplication of many very small numbers
• Numbers get very close to zero, and there is a danger of numeric instability
• We can deal with this by computing the logarithm of the conditional probability:
log 𝑃(𝐶|𝐴) ∼ log 𝑃(𝐴|𝐶) + log 𝑃(𝐶) = Σ𝑖 log 𝑃(𝐴𝑖|𝐶) + log 𝑃(𝐶)
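A small sketch of the log-space computation (the probability values below are illustrative placeholders; a smoothed estimate would replace any exact zero, since log 0 is undefined):

```python
# Sum log-probabilities instead of multiplying many small numbers,
# then compare classes by their log-scores.
import math

cond_probs = {'No': [4/7, 4/7, 0.0072], 'Yes': [1.0, 1e-12, 1.2e-9]}
priors     = {'No': 0.7, 'Yes': 0.3}

for c in cond_probs:
    score = math.log(priors[c]) + sum(math.log(p) for p in cond_probs[c])
    print(c, score)   # the larger (less negative) log-score wins
```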
Naïve Bayes for Text Classification
• Naïve Bayes is commonly used for text classification
• For a document 𝑑 = (𝑡1, … , 𝑡𝑘):
𝑃(𝑐|𝑑) ∝ 𝑃(𝑐) Π𝑡𝑖∈𝑑 𝑃(𝑡𝑖|𝑐)
• 𝑃(𝑡𝑖|𝑐) = fraction of terms from all documents in c that are 𝑡𝑖.
• Easy to implement and works relatively well
• Limitation: Hard to incorporate additional features (beyond words).
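A hedged end-to-end sketch with scikit-learn (an assumed library choice; the tiny corpus is made up): term counts per class give the 𝑃(𝑡𝑖|𝑐) estimates, and MultinomialNB combines them with the prior 𝑃(𝑐):

```python
# Naive Bayes text classification: bag-of-words counts + MultinomialNB.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs   = ["cheap pills buy now", "meeting at noon",
          "buy cheap now", "noon meeting notes"]
labels = ["spam", "ham", "spam", "ham"]

vec = CountVectorizer()
X = vec.fit_transform(docs)            # term-count matrix
clf = MultinomialNB().fit(X, labels)   # estimates P(c) and P(t_i|c)

print(clf.predict(vec.transform(["buy pills now"])))  # -> ['spam']
```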
Naïve Bayes (Summary)
• Robust to isolated noise points
• Handle missing values by ignoring the instance
during probability estimate calculations
• Robust to irrelevant attributes
• Independence assumption may not hold for some
attributes
• Use other techniques such as Bayesian Belief Networks
(BBN)
• Naïve Bayes can produce a probability estimate, but
it is usually a very biased one
• Logistic Regression is better for obtaining probabilities.
Generative vs Discriminative models
• Naïve Bayes is a type of generative model
• Generative process:
• First pick the category of the record
• Then given the category, generate the attribute values from the
distribution of the category
• Conditional independence given C
• We use the training data to learn the distribution
of the values in a class
[Diagram: class node C with arrows to attribute nodes 𝐴1, 𝐴2, …, 𝐴𝑛.]
Generative vs Discriminative models
• Logistic Regression and SVM are discriminative
models
• The goal is to find the boundary that discriminates
between the two classes from the training data
• In order to classify the language of a document,
you can
• Either learn the two languages and find which is more
likely to have generated the words you see
• Or learn what differentiates the two languages.